
AMD News


AMD has finally released ReLive, its analogue of Nvidia ShadowPlay. It requires no registration and costs fewer fps than ShadowPlay. A lot depends on the game, but the biggest difference I saw was in Dirt Rally: with Nvidia, fps dropped by 10.9%, with AMD by 3.9%. AMD is finally getting back on its feet with software and starting to overtake Nvidia, since Nvidia has frankly begun to let its own drivers go downhill.
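The overhead figures quoted are simple relative fps drops; a minimal sketch of the arithmetic (the raw fps values below are hypothetical placeholders, only the percentages come from the post):

```python
def capture_overhead(fps_off: float, fps_on: float) -> float:
    """Percentage fps drop when recording is enabled."""
    return (fps_off - fps_on) / fps_off * 100.0

# Hypothetical fps numbers, for illustration only (not the actual Dirt Rally data)
print(round(capture_overhead(100.0, 89.1), 1))  # 10.9
print(round(capture_overhead(100.0, 96.1), 1))  # 3.9
```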

http://www.amd.com/en-us/innovations/software-technologies/radeon-software/gaming/radeon-relive


It looks like AMD has created a competitor to the GTX 1080: AMD Vega, demoed at 4K and above 60 FPS.

 



Not the first and not the last staged demo made for marketing. Just wait and see whether the series at least reaches GTX 1070 level once it's out :) Only the naive believe such claims these days. It will be released, plenty of people will test it, and only then can you say they've built a competitor. And even then, Nvidia is holding the 1080 Ti up its sleeve :)


Yeah, I recently watched a review of one such monitor, but honestly I think you need to see for yourself how it looks.


AMD sets stage for next-gen displays with FreeSync 2

 

AMD is today lifting the lid on its next-generation display technology, FreeSync 2.

Described as the "next major milestone in delivering smooth gameplay and advanced pixel integrity to gamers," FreeSync 2 expands on the initial promise of a tear- and stutter-free experience by adding High Dynamic Range (HDR) enhancements into the mix.


Understanding that the HDR gaming experience on PC is at risk of lagging behind the latest games consoles, AMD cites "higher-than-acceptable" latency as a drawback to current HDR setups. The firm's workaround is to offload tone mapping from the display to the Radeon GPU, allowing the game engine to tone map directly to the display's target luminance, contrast and colour space using new FreeSync 2 APIs.
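As a rough illustration of what tone mapping to a display's target luminance involves, here is a generic Reinhard-style operator; this is my own sketch, not AMD's FreeSync 2 API, whose details are not public in this article:

```python
def tone_map(scene_nits: float, display_max_nits: float) -> float:
    """Map an HDR scene luminance value onto a display's luminance range
    using a simple Reinhard-style curve (illustrative, not AMD's API)."""
    normalized = scene_nits / (scene_nits + display_max_nits)
    return normalized * display_max_nits

# A 4,000-nit highlight squeezed onto a 1,000-nit panel
print(tone_map(4000.0, 1000.0))  # 800.0
```

Doing this once in the game engine, against the panel's real capabilities, is what lets FreeSync 2 skip the display's own (slower) tone-mapping pass.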

Any cut in latency is good news for gamers, and while HDR is one to watch for the future, existing FreeSync users will be pleased to learn that the certification requirements have been tightened for FreeSync 2. This time around, display manufacturers wanting to employ the technology will be required to support Low Framerate Compensation (LFC), whereby the maximum framerate is at least 2.5x the minimum.
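The LFC requirement is easy to express as a check; a minimal sketch (the refresh-rate values in the example are hypothetical):

```python
def supports_lfc(min_hz: float, max_hz: float) -> bool:
    """FreeSync 2 certification: the display's maximum refresh rate
    must be at least 2.5x its minimum for Low Framerate Compensation."""
    return max_hz >= 2.5 * min_hz

print(supports_lfc(48, 144))  # True  (144 >= 2.5 * 48 = 120)
print(supports_lfc(48, 100))  # False (100 < 120)
```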


Low latency, smooth gaming and the vibrance of HDR seem a perfect formula, though there are potential hurdles to overcome. A more demanding certification process is likely to increase costs, it remains to be seen how many game developers will choose to harness the HDR APIs, and with AMD making a play for the high-end market, we don't imagine the first batch of FreeSync 2 displays will be cheap.

Suggesting that the refreshed technology may initially be the preserve of the enthusiast, AMD has confirmed that FreeSync and FreeSync 2 will co-exist on the market. The good news for gamers contemplating an upgrade is that all FreeSync-compatible Radeon GPUs will offer full support for FreeSync 2 and there shouldn't be too long to wait as the first qualifying monitors are expected to arrive within six months.



Introduction


AMD today took wraps off a processor family that constitutes arguably the company's most important launch in years. Codenamed Naples, that family is now known as AMD Epyc.

Epyc represents AMD's firm intentions to return to the x86-based server market dominated by Intel after the effective demise of the Opteron chips from a heyday in 2003.

Appreciating that Intel holds a near monopoly in server-class CPUs and chipsets, AMD has much to gain and little to lose.

We'll discuss the various models, talk through the architecture underpinning Epyc, evaluate benchmarks provided by AMD on a Tech Day yesterday, and then see if Epyc, as an ecosystem, has the wherewithal to challenge the incumbent Intel Xeon chips that hold market-share hegemony.

Model Numbers


More info

Going from the highest level of detail and working our way in, AMD productises Epyc into a 7000-series family consisting of nine processors ranging from eight cores and 16 threads through to 32 cores and 64 threads. Targeting the meat of the server and premium workstation market means that two Epyc processors can be run on a single board, or 2P in server parlance, offering up to 128 threads in a top-level configuration. AMD's research indicates that 95 per cent of x86-based servers fall into the 1P-2P category.

The meagre core count of the Epyc 7251 processor is deliberate in two ways: it provides the cheapest path to these server processors and is designed for systems where the per-core cost of licensing relevant software is a budgetary concern.

Rising up the stack brings with it the goodness of more cores/threads and speeds. Epyc 7281, 7301 and 7351 all share the same 16C/32T topology but run at different base and boost speeds as well as different power budgets. The reason for the latter, explained AMD's Scott Aylor, is down to the speed of the memory controller resident within the processor, as operating it at DDR4-2666 consumes more power than, say, DDR4-2400, hence the 180W and 155W ratings, respectively.

We can conjecture that AMD is binning Zen cores at the wafer level in order to determine those with the best frequency-to-voltage characteristics. These are then kept for Epyc chips, enabling up to 32 cores to run at solid speeds whilst the whole package consumes less than 200W.

The fastest, and most expensive, trio all house 32 cores in what is known as a multi-chip module. Again, final speeds dictate pricing and comparison against extant Intel Xeon processors.


AMD's research indicates that 25 per cent of servers ship with just a single CPU in situ, so for markets where only a single processor is necessary - and workstation and entry-level server spring to mind - AMD is also offering a trio of Epyc chips that are fused at the factory to work as uniprocessors. The Epyc 7551P, 7401P and 7351P are otherwise identical to their 2P-capable brethren though are priced lower in order to gain market share in these environments.

Every Epyc processor uses an SP3 LGA socket - where the pins are on the motherboard, not the processor, contrary to desktop Ryzen - and AMD has confirmed that two future, improved Epyc models, codenamed Rome and Milan, will also maintain socket compatibility for simpler upgrading.

Though we call them processors, it would be more accurate to refer to Epyc as an SoC (system on chip) as each package integrates all of the IO functionality and memory controllers. Speaking of memory, a fully-populated Epyc chip can handle two Dimms for each of its eight channels, so 16 in total, or 2,048GB (2TB) when using 128GB sticks.
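The memory arithmetic works out as follows:

```python
channels_per_socket = 8
dimms_per_channel = 2
dimm_capacity_gb = 128   # largest sticks cited in the article

dimms = channels_per_socket * dimms_per_channel   # 16 DIMM slots per socket
capacity_gb = dimms * dimm_capacity_gb            # 2,048 GB, i.e. 2 TB
print(dimms, capacity_gb)  # 16 2048
```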

We'll see just how the motherboard guys get around creating standard-sized boards for two massive Epyc processors, a potential 32 Dimms for memory, and numerous slots for IO.

 

Architecture, Implementation

Epyc is based on the Zen architecture that is also the foundation for the Ryzen desktop CPUs. As you may know, those parts top out at eight cores and 16 threads, with the assumed, upcoming Threadripper CPU doubling that count, though given the socket change, server-optimised Epyc and client Ryzen 7/5/3 chips will not be interoperable. AMD has since confirmed that, from a motherboard perspective, Threadripper and Epyc are also incompatible.

That isn't to say they don't have significant commonality. The core building blocks behind Epyc use the same Zen core as Ryzen, so it's worth refreshing your knowledge by heading over to our introductory article right over here. And if you want a simpler eye chart to see how the Zen core compares with previous Bulldozer and Intel's Broadwell, feast your peepers on this slide.

What's ostensibly different is how Epyc is distinct from an implementation point of view, especially as the core count scales up to 32 in the premier parts. Let's go from the outside in and start with IO first.

Lots of IO


Every Epyc processor possesses 128 lanes of IO, compared with 40 lanes for the latest Broadwell-based Xeons. This means that one can drive a huge number of eclectic devices from a single chip. For example, 64 lanes can be used for, say, four full-bandwidth graphics accelerators, you can chuck another 32-odd lanes for premium storage options, and so on.

Intel's current generation cannot touch this amount of IO, clearly, but AMD's advantage isn't as huge as the base numbers would suggest, as a number of lanes would be reserved for base connectivity such as networking, Sata, etc. Intel encounters the same problem but gets around it by adding 20 or so lanes from its PCH 'southbridge'. The end result, still, is that Epyc enjoys a real-world 2x IO advantage in a 1P environment. That advantage means that fewer on-motherboard switches need to be used to expand lane counts, thus simplifying motherboard design and potentially lowering cost.
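As a back-of-the-envelope illustration of how those 128 lanes might be budgeted (the device mix is hypothetical, extrapolated from the examples above):

```python
total_lanes = 128  # per Epyc socket in a 1P system

# An illustrative lane budget; the exact device mix is an assumption
budget = {
    "four x16 graphics accelerators": 4 * 16,  # 64 lanes
    "eight x4 NVMe drives": 8 * 4,             # 32 lanes
    "networking, Sata, misc": 16,
}
used = sum(budget.values())
print(used, total_lanes - used)  # 112 lanes used, 16 spare
```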


These IO lanes can be used for either PCIe (8Gbps), Sata (6Gbps), or grouped together for Infinity Fabric as a chip-to-chip interconnect in a 2P system. In that case, each processor reserves the equivalent of 64 PCIe lanes to connect to the other (128 in total, therefore), hence reducing the potential amount of IO from 256 to the same 128 lanes, as shown in the above picture.

Having heaps of IO being fed in and out of the processor inevitably puts strains on intra-chip and memory bandwidth, so a balanced design needs lots of both to ensure that IO doesn't become a bottleneck.

Moving into the chip - the need for four dies for all Epyc CPUs


Here is a simplistic view of an Epyc chip, comprised of four dies in a multi-chip module. It is important to understand that all Epyc chips, regardless of the number of stated cores, are built this way.

Making it easy to understand, each of the four dies holds the equivalent of a Ryzen 7 processor. This means two CCX units - each holding four cores and an associated L3 cache - are connected to one another via intra-chip Infinity Fabric. Each two-CCX die has its own, individual dual-channel memory controller. Adding all this up means that a fully-populated Epyc chip has eight CCXes, 32 cores, and an aggregate of eight-channel memory running at a maximum of DDR4-2666 with one Dimm per channel and DDR4-2400 with two Dimms per channel.
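Adding it up in code form:

```python
dies_per_package = 4
ccx_per_die = 2
cores_per_ccx = 4
threads_per_core = 2          # SMT
memory_channels_per_die = 2   # dual-channel controller on each die

cores = dies_per_package * ccx_per_die * cores_per_ccx
threads = cores * threads_per_core
channels = dies_per_package * memory_channels_per_die
print(cores, threads, channels)  # 32 64 8
```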

As memory bandwidth is key to solid performance in the datacentre and only two channels are connected to each die, the eight- and 16-core Epyc chips have to use all four dies. Reinforcing what we said above, this also means that all Epyc chips share the same silicon topology - there is no other way to get the required level of bandwidth in an MCM setup than by going down this road.

As well as intra-die CCXes connected via Infinity Fabric, each die in turn is also connected to every other one via that Infinity Fabric, operating at 42.6GB/s bi-directionally, adding up to 170.4GB/s across four dies. Remember that number.


Now let's see if Epyc is a balanced MCM design by looking at effective memory bandwidth and also IO speed.

There's a total of 170.4GB/s of aggregated memory bandwidth, too, as each of the eight channels, operating at a peak 2,666MT/s, can shift 21.3GB/s into the chip. Note that this is for all dies going full chat, as technically each one only has access to two memory channels.
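Both 170.4GB/s figures - memory and die-to-die fabric - fall out of the same arithmetic, provided the per-channel bandwidth is rounded to 21.3GB/s first:

```python
# Per-channel bandwidth: 2,666 MT/s on a 64-bit (8-byte) channel
per_channel_gbs = round(2666 * 8 / 1000, 1)   # 21.3 GB/s
memory_total = per_channel_gbs * 8            # eight channels -> 170.4 GB/s

# Die-to-die Infinity Fabric: 42.6 GB/s per link, four fully-connected dies
fabric_total = 42.6 * 4                       # 170.4 GB/s
print(memory_total, fabric_total)
```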

Looking at a 2P system, the inter-chip Infinity Fabric offers up a potential 152GB/s between processors, intimating that all the theoretical numbers stack up, that is, show no obvious signs of bottlenecking.

There may be some IO-to-memory traffic bottlenecks in a 1P environment where almost all of the 128 PCIe 3.0 lanes are used for super-fast storage; each die has 64GB/s of PCIe bandwidth and, as we have seen, 'only' 42.6GB/s of memory transfers available.

Point is, building an efficient MCM chip with lots of IO hanging off it also requires tonnes of memory bandwidth and lots of intra- and inter-die bandwidth, as well as inter-chip speed. Epyc would not have been possible without Infinity Fabric there to hook it all up.

 

Power optimisations

Thinking back to the Zen architecture itself, AMD has endowed it with thousands of sensors that monitor temperature, voltage, power and frequency. Each of these is plumbed into algorithms that then set the optimum balance between voltage and frequency at any given point in time. This is usual for modern processors, but where AMD says it differs is with respect to the granularity and speed of the on-the-fly changes.


The reason we mention this is that such precise adjustments are far more valuable in the server space, where shaving off a few watts here and there from a rack can add up to significant power savings across the datacentre: even with all the other components present in a box - memory, NIC(s), HDD, fans, etc. - the CPU(s), understandably, consume over 50 per cent of the total power of a standard, GPU-less 2P server.

Therefore, each of the up to 32 cores within Epyc has its own regulator that governs voltage for a given frequency. As you would expect, silicon doesn't have perfectly even characteristics across the die, meaning that some cores will require higher voltage than others to run at a particular speed. Using the latest adaptive voltage and frequency scaling (AVFS), AMD reckons that each core's voltage can be tuned to within 2mV for the desired frequency, or with more granularity than the regulator itself allows.


Interestingly, the TDPs quoted on the first page, ranging from 120W through to 180W, are configurable through the BIOS for an OEM with enough knowledge of cooling. Take the top-bin Epyc 7601 as an example. Where power consumption is less of a concern, that chip can be hiked to 200W with the extra energy driven towards higher speeds and voltage. Of course, the gains will be limited, because AMD already has an optimum speed/voltage curve, but it's possible to eke out that bit more performance.

The same chip can be driven at a lower voltage/speed in order to increase the performance-per-watt metric in power-constrained environments, too, and this is why each processor has a range. Final boost speeds will depend upon just how this configurability is adopted, but you can have a reasonable range of curves for each chip.


Remember that aggregate 128-lane Infinity Fabric interconnect between chips in a 2P environment? Lighting up 152GB/s of data traffic doesn't help with respect to power consumption, understandably, so in cases where the workload is far more bound by compute, this link dynamically reduces speed and voltage in order to save power and then reinvests it into more per-chip compute. It's an obvious way of ensuring that each watt is used sensibly.

Security

No exposition of a modern server processor would be complete without touching upon security.


AMD adds an ARM-based Cortex-A5 Secure Processor within the silicon of every Zen-based Epyc chip. This little chip's job is to provide hardware-based support for two new technologies called Secure Memory Encryption (SME) and Secure Encrypted Virtualisation (SEV).

SME offers real-time memory encryption at boot time. It works by marking pages of memory as encrypted through the page-table entries. What this means is that any kind of memory can be AES-encrypted to mitigate physical memory attacks. The memory isn't hidden in any meaningful way, of course, but any snooping or accesses will show it as encrypted. A single encryption key is generated and stored on chip. AMD says that such encryption, run via a couple of AES engines, causes minimal access-latency increases.
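Marking a page as encrypted happens through its page-table entry; on Zen, SME repurposes physical-address bit 47 of the entry as the 'C-bit'. A minimal sketch of the idea (the bit position matches AMD's documented encoding, but the helper functions are my own illustration):

```python
C_BIT = 1 << 47  # SME 'C-bit' in an x86-64 page-table entry (Zen encoding)

def mark_encrypted(pte: int) -> int:
    """Flag a page-table entry so its page is AES-encrypted by the memory controller."""
    return pte | C_BIT

def is_encrypted(pte: int) -> bool:
    return bool(pte & C_BIT)

pte = 0x1000 | 0x3              # frame address plus present/writable flag bits
enc = mark_encrypted(pte)
print(is_encrypted(pte), is_encrypted(enc))  # False True
```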


Leveraging the increasing number of cores and threads within a modern server means that multiple virtual machines can be run on it. Software known as a hypervisor emulates the hardware and enables these virtual machines to function. So, for example, 'your' remote server may be one of a number of servers, virtualised, and run on a single physical machine in the cloud. Ensuring that VMs are protected from compromised hypervisors is becoming increasingly important.

This is where the SEV technology comes in, according to AMD. SEV encrypts parts of the memory shared by virtual machines - this is your 'part' of the server - and issues a unique key to each VM. The point is, compromised hypervisors know that multiple guest VMs are running but are not able to access their contents because their memory is cryptographically isolated.


I'd say these will be decent server CPUs from AMD, with their 128 PCIe lanes.
I can picture such a CPU working with 8 graphics cards at PCIe x16.
And even the weakest of these CPUs will still support 128 PCIe lanes (I wonder what the price will be).

 



The people and companies buying server CPUs are not snivelling fanboys with an "I only buy Intel" motto, nor will they walk into a shop and ask the salesperson which CPU to recommend; they'll pick whatever works best for them, and AMD will aim for its profit sweet spot according to what its CPUs can do.


Info on the 2nd-generation AMD Ryzen CPUs has appeared.

 

AMD-Ryzen-2000-tecnologias-1.png
AMD-Ryzen-2000-tecnologias-2.png
AMD-Ryzen-2000-modelos.png
Ryzen-7-2700x-vs-Ryzen-7-1800X.png
AMD-Ryzen-2000-Precios.png



https://www.guru3d.com/news-story/amd-announces-rx-5000-series-graphics-processors-at-computex-demos-rx-5700.html

https://www.guru3d.com/news-story/amd-announces-ryzen-7-3700x3800x-and-ryzen-9-3900x.html

 

So there you go, in case you missed it: AMD finally (or maybe right on time) coughed up the new Ryzen series and Navi. The Navi series will be called Radeon RX 5xxx, which somewhat recalls the Radeon HD 5xxx days. :)

