AMD & Nvidia Desktop Graphics Cards Thread

Malevolence

Nvidia GeForce GTX 560 Ti with 448 Cores Launched

NVIDIA released its newest graphics card model specifically for the winter shopping season: the limited edition GeForce GTX 560 Ti 448 Cores. Not only is this a limited edition launch, it also targets only specific markets in North America and Europe: the United States and Canada in North America, and the UK, France, Germany, Russia, and the Nordics in Europe. The new card is based on the 40 nm GF110 GPU instead of the GF114 that the regular GTX 560 Ti uses. This allows NVIDIA to add 64 more CUDA cores (448 vs. 384), 25% more memory (1280 MB vs. 1024 MB), and a 25% wider memory bus (320-bit vs. 256-bit).

 

The new limited edition GeForce GTX 560 Ti 448 Cores features clock speeds identical to those of the GeForce GTX 570: 732 MHz core, 1464 MHz CUDA cores, and 950 MHz (3.80 GHz effective) GDDR5 memory. Since it's based on the GF110 GPU, this new card is also capable of 3-way SLI, something the regular GTX 560 Ti isn't. The card draws power from two 6-pin PCIe power connectors. Display outputs typically include two DVI and a mini-HDMI. Add-in card vendors are free to design their own graphics cards based on this chip, so expect most GTX 560 Ti 448 Cores cards to look similar to non-reference GTX 570 ones. ZOTAC, Inno3D, EVGA, Palit, Gainward, ASUS, Gigabyte, and MSI will have graphics cards based on this chip. Prices should typically start at US $289.
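For anyone wondering how that "3.80 GHz effective" figure and the wider bus translate into bandwidth, here's a quick back-of-the-envelope (GDDR5 transfers four bits per pin per clock; the regular GTX 560 Ti's ~4.0 GT/s memory figure is its reference spec, not from the article):

$$950\ \text{MHz} \times 4 = 3.80\ \text{GT/s}, \qquad \frac{320\ \text{bit}}{8\ \text{bit/byte}} \times 3.80\ \text{GT/s} = 152\ \text{GB/s}$$

$$\text{regular GTX 560 Ti: } \frac{256\ \text{bit}}{8\ \text{bit/byte}} \times 4.0\ \text{GT/s} \approx 128\ \text{GB/s}$$

So the 25% wider bus nets roughly 19% more bandwidth in practice, because the regular card's memory runs at a slightly higher clock.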


---

EVGA Introduces Two GeForce GTX 580 Classified Ultra Graphics Cards

 

To spice up the holidays of Nvidia fans, EVGA has developed a couple of 'new' GeForce GTX 580 Classified cards: two 'Ultra' models that come with GPU/shader/memory clocks of 900/1800/4212 MHz (previously released GTX 580 Classified cards topped out at 855/1710/4212 MHz).

 

 

Besides the ultra-high clocks mentioned (on Fermi the shader domain runs at double the core clock, hence the 900/1800 pairing), EVGA's latest offerings feature 512 CUDA cores, a 384-bit memory interface, 3 GB of GDDR5 memory, 4-way SLI and EVBot support, a custom PCB, a 14+3 phase power design, one NEC Proadlizer, Super Low ESR SP-Cap capacitors, and high-frequency 3 MHz shielded inductors.

 

The air-cooled GeForce GTX 580 Classified Ultra costs $619.99 while the water-cooled GeForce GTX 580 Classified Ultra Hydro Copper has a price tag of $749.99. Neither card is shipping yet but hopefully they will become available very soon.

 


 

This makes it the most powerful single-GPU 40 nm card by far!!! Delicious! :D

Edited by Malevolence

---

Bye Bye 40nm GPUs! Hello 28nm GPUs!

 

AMD is the first to launch a 28 nm GPU... introducing the AMD Radeon HD 7970!!

 


 

The AMD Radeon HD 7970 launch is just around the corner. Ahead of its launch, AMD conducted its usual press briefing, and DonanimHaber has access to some of the slides shown in that meeting. Earlier today, we brought you perhaps the most important of them all: the specifications. Let's take a look at the reference board design itself. AMD is sticking to the black+red colour scheme, and has come up with a swanky new cooling assembly design. The design, unlike those of higher-end Radeon HD 6000 series graphics cards, is surprisingly curvy and features dashes of red plastic making up its contours, surrounded by tougher black ABS.

 

A welcome change from previous generations is that the card is truly single-slot capable when, say, a single-slot full-coverage water block is used. High-end cards from the previous-generation HD 5000 and HD 6000 series have a dual-DVI connector cluster that extends into two expansion slots, which many enthusiasts found annoying, especially when setting up benches with four single-GPU graphics cards in scenarios where PCI-Express slot spacing isn't kind. Moving on to display connectivity, the card has one DVI, one HDMI, and two mini-DisplayPort connectors, all arranged in the confines of a single expansion slot. The space of the second slot is dedicated to the hot-air exhaust of the cooling assembly. All board partners are required to ship HDMI-to-DVI dongles, and active mini-DP dongles.

 

http://www.techpowerup.com/156879/AMD-Radeon-HD-7970-Reference-Board-Design-Detailed-Single-Slot-Capable-Finally-.html

---

I'm probably not getting one of those then. I wanted one till I saw the display connectors. I might still get one if the GTX 660/670 isn't better or at least comparable. I still use a ****load of DVI/VGA, so their choices are going to be murder on my setup. Luckily I have an HDMI-capable monitor and DVI, so I could go there at the very LEAST.

---
LeakedTT: NVIDIA to skip 600-series, jump straight to GeForce GTX 780? Did I mention it is nearly twice as fast as the GTX 580?

This was mostly unexpected, but then again, AMD have been waving their big red flag all over the Internet in the last few days thanks to the leaked AMD Radeon HD 7970 performance figures and specs. Today, we have PCINLIFE leaking an NVIDIA slide that shows the difference in performance between their current single-GPU hero, the GeForce GTX 580, and the GeForce GTX 780.

 


 

The leaked slide shows the performance difference between the current GTX 580 and the next-gen Kepler-based GTX 780. The test bed is a Core i7-3960X, Windows 7 64-bit, a 297-series driver, at 2560x1600 resolution with both AA and AF enabled. If the chart is correct, the GTX 780 is virtually twice as fast as the GTX 580.

 

Impressive... no wait, there's a better word for this: amazing. No, that's not good enough. CAN O' WHOOP ASS. Yeah, that'll do. But why the skip from the 600 series to the 700 series? Well, it would make sense if NVIDIA want to avoid being measured against AMD's Radeon series numbering, and feel like they're on an even playing field.

 

AMD's next-gen cards are a 7-series (7xx0), and NVIDIA's next-gen, if released in numerical order, would be the GTX 6xx range. Skipping the 6 and going straight to 7 would make sense, and if it provides a near-100-percent improvement in performance, NVIDIA can do whatever the heck they want and I'm sure we'll all still love the name.

 

2012 is definitely going to be the year of GPUs. Now we just need some games, or monitor upgrades, to make the new GPU power worth it.

 

Remember: this is just a leak/rumor, so TweakTown advise that you're up to date with your grains of salt and your skill at pinching them.

Say what? GeForce GTX 780?? Not GTX 680? Yes!

http://www.tweaktown.com/news/21936/leakedtt_nvidia_to_skip_600_series_jump_straight_to_geforce_gtx_780_did_i_mention_it_is_nearly_twice_as_fast_as_the_gtx_580/index.html

 

 

 

---

 

In other news...

New Radeon Pictures Leaked: HD 7770

The first pictures of AMD's mainstream card, the HD 7770, have now been leaked online. This card is the first major upgrade to the HD 5770 in two years, since the HD 6770 was just a rebrand. It features the Cape Verde GPU, which replaces the Juniper GPU used in the HD 5770/HD 6770. The card looks somewhat different, with a large fan sitting on top of the GPU and blowing directly onto it. The card's length is the same as the HD 5770's, at around 8.25 inches.


SPECIFICATIONS

  • Three display outputs: DVI, HDMI, and mini-DisplayPort
  • Single CrossFire connector, for two cards max
  • The reference PCB has four GDDR5 memory chips, implying a 128-bit memory interface (each GDDR5 chip has a 32-bit interface, so 4 × 32 = 128)
  • Single 6-pin PCI-E power connector
  • Around 100 W power consumption

Finally, as can be seen in the leaked picture of the naked PCB, the Cape Verde GPU is physically quite small for this level of performance. The old Juniper XT is 166 mm² and the GF116 (NVIDIA GTX 550 Ti) is 230 mm², while this GPU looks like it's under 150 mm². Judging by the sizes of the fan and the GPU, this card should be quiet and have good thermals.

 

http://www.techpowerup.com/157022/New-Radeon-Pictures-Leaked-HD-7770.html

Edited by Malevolence

---
AMD Radeon HD 7900 Key Features Listed

We've already been through the specifications of the HD 7970 "Tahiti" in the kind of detail that matters to those who can draw a performance hunch from them. This latest slide shows the feature set this GPU comes with. To begin with, there are three main categories of feature updates: Graphics CoreNext, AMD Eyefinity 2.0, and AMD APP Acceleration. AMD claims CoreNext to be a "revolutionary" new architecture that changes the way the GPU crunches numbers.

 

For the past five generations (since the Radeon HD 2000 series), AMD GPUs have used the VLIW (very long instruction word) core arrangement. Even the latest VLIW4 arrangement, introduced with the Radeon HD 6900 series, was an evolution rather than a revolution of that. CoreNext replaces VLIW stream processors with super-scalar Graphics Compute cores. This should translate to higher performance per mm² of die area, resulting in smaller GPUs and giving AMD room for greater cost-cutting if the competition from NVIDIA heats up this generation. The GPU itself is built on TSMC's new 28 nm silicon fabrication process. Next up, AMD confirmed support for the PCI-Express 3.0 interface, which nearly doubles system bus bandwidth over the previous generation.
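To see why dropping VLIW should lift utilization, here's a rough sketch (ordinary C standing in for shader code; illustrative only, not real GPU ISA or AMD's actual scheduler):

```c
/* Illustrative only: a dependent arithmetic chain, the worst case for a
   VLIW machine. Plain C standing in for shader code, not real GPU ISA. */
float dependent_chain(float x, float y, float z)
{
    float a = x * y;   /* op 1               */
    float b = a + z;   /* op 2: needs op 1   */
    return b * b;      /* op 3: needs op 2   */
}
/* A VLIW4 stream processor relies on the compiler packing four
   independent operations into every issue bundle; this chain can fill
   only one of the four slots per cycle, idling ~75% of the ALUs.
   A scalar compute core instead issues one operation per cycle across a
   wide group of threads, so utilization no longer hinges on the compiler
   finding instruction-level parallelism -- which is where the expected
   gain in performance per mm² of die area comes from. */
```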

 


 

To manage power, there's AMD's PowerTune technology, which adjusts clock speeds both ways (both below and above specifications) to remain within a set power envelope. Then there's talk of ZeroCore technology, a feature that completely turns off the GPU when the operating system instructs it to "turn off monitors" while idling. This reduces the graphics card's power draw to under 3 W, a 10X reduction in idle power draw over the previous generation (implying the outgoing cards idle at roughly 20-30 W). This feature is also said to send the non-primary GPUs in a CrossFire setup (the second, third, and fourth) into the ZeroCore state while keeping the primary GPU (the one displays are connected to) awake when the user isn't running 3D-intensive tasks.

 

Moving on to Eyefinity 2.0, there's talk of some big changes. First is DDM audio, a directional audio system that outputs independent 7.1-channel audio streams through each of the HDMI and DisplayPort connectors. When you move an audio-producing application's window from one display to another, its audio stream dynamically shifts from one connector to the other. A practical application of this could be running an extra-long HDMI cable from your PC to your living room; we'll leave your creativity to do the rest of the thinking. Eyefinity 2.0 will keep support for the 5x1 landscape arrangement. The system will also support HD3D, displaying stereoscopic 3D on each member display while compensating for the display's angle relative to the user's perspective, and for the bezels.

 

Moving on, the APP runtime gets some feature updates that take advantage of the more powerful GPU compute architecture these GPUs come with. It will be designed for [heavy] computing loads (bitcoin magnets?), and will feature SteadyVideo 2.0, which further enhances videos after compensating for camera shake. Improved UVD could mean more video decoding features; home 3D video is the next big thing, so the new UVD engine could address some performance issues related to that. Finally, AMD will expand its developer ecosystem, so more applications will support AMD APP.

 

http://www.techpowerup.com/157037/AMD-Radeon-HD-7900-Key-Features-Listed.html

---

I wonder what happened to Malevolence lately. Ah well, time to continue with the new tech stuff.

 

Here comes the new NVIDIA GeForce GTX 680:

 


 

 

NVIDIA’s Kepler-based GeForce GTX 680 arrived this morning with no shortage of promises: faster than the Fermi GPUs of old, but cooler-running and more power-efficient too. According to the graphics company, the new 28nm Kepler technology can do twice as much graphical magic per watt as previous GPUs, with the ability to drive four monitors from a single card. Plenty of hyperbole, then, but how does the GTX 680 hold up in practice? We’ve been crunching through the launch-day reviews; check out our summary after the cut.

 

If glowing praise is what you’re looking for, AnandTech doesn’t disappoint. The GTX 680 ticks their boxes for performance, cool running and low noise output, and gets extra credit for undercutting AMD’s Radeon competition. If there’s any mark on the scorecard, it’s the fact that performance, although higher than the GTX 580’s, hasn’t seen a significant step up; that's something they expect to change as the Kepler platform evolves.

 

Bit-Tech, meanwhile, praise the GTX 680’s performance at lower resolutions, pointing out that what could possibly be memory bandwidth limitations cuts down its framerate advantage over the Radeon HD 7970 3GB when you jump from 1920 x 1080 to 2560 x 1600. Still, they conclude, “At this price NVIDIA is certainly asking some tough questions of AMD.”
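Bit-Tech's bandwidth hunch checks out on paper. Using the two cards' reference memory specs (these numbers are from the published specifications, not from the review), GDDR5 bandwidth is simply bus width times effective data rate:

$$\text{GTX 680: } \frac{256\ \text{bit}}{8} \times 6.0\ \text{GT/s} \approx 192\ \text{GB/s}, \qquad \text{HD 7970: } \frac{384\ \text{bit}}{8} \times 5.5\ \text{GT/s} = 264\ \text{GB/s}$$

That's about 37% more memory bandwidth for the Radeon, an advantage that matters more as the resolution climbs.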

 

It’s an opinion shared by HardOCP, who pushed the new NVIDIA to its multi-screen limits. “We configured an NV Surround triple-display configuration and gamed on three displays from the single GeForce GTX 680,” the site writes. “We wanted to be able to run at the native resolution of 5760x1200 and compare the performance to the Radeon HD 7970 … We were absolutely surprised that the GeForce GTX 680 had no trouble keeping pace with the Radeon HD 7970 at 5760x1200.”

 

Hexus put the overclocking potential of the GeForce to the test, something made somewhat confusing by the fact that users need to juggle frequencies and TDP targets to get the best results. Still, they managed to get their review card up to 1,250MHz core and 6,608MHz memory speeds with no apparent stability issues.

 

“To put it simply, the GeForce GTX 680 is the fastest single-GPU based graphics card we have tested to date,” Hot Hardware said of the NVIDIA, pegging it at 5 to 25 percent faster than the Radeon HD 7970 depending on the task. That advantage could well increase, however, with Maximum PC pointing out that the Kepler chips apparently have headroom to increase power consumption, and thus performance, too.

 

Of all the reviews, perhaps it’s The Tech Report that isn’t so convinced the GTX 680 completely ousts the Radeon HD 7970. Their testing found the NVIDIA suffered somewhat higher frame latencies than the AMD, though the $50 RRP difference was enough to make them overlook it, along with the fact that “any difference is probably imperceptible to the average person.”

 

Tom’s Hardware expects AMD to take a sizable knife to Radeon HD 7970 and 7950 pricing as a result of the competitive GeForce, predicting $100 and $50 cuts respectively. Their testing found a roughly 72-percent improvement in performance per watt versus Fermi, short of NVIDIA’s “double” (i.e., +100 percent) claim, but still impressive.

 

Finally, X-bit Labs takes a balanced view of the whole matter, pointing out that the real winner in the end is the consumer. The new GTX 680 is not only a solid option in its own right – and heralds a new, exciting range from NVIDIA – but it will force AMD to cut prices; fans of both companies’ products will benefit.

 

Source: http://www.slashgear.com/nvidia-geforce-gtx-680-review-roundup-22219611/

Review: http://www.pcmag.com/article2/0,2817,2401953,00.asp

---

Here comes the new pricey $849 USD dual-GPU monster video card from AMD, using PCI-Express 3.0 (codenamed New Zealand):

 


 

AMD Radeon HD 7990 Reference Board Pictured, Specs Confirmed in GPU-Z Screenshot

Admittedly, this is a terrible day for news on unannounced GPUs, but we rushed it in anyway. Here are the first board shots of AMD's next-generation dual-GPU graphics card, the Radeon HD 7990 (codename: "New Zealand"). Sources told us that AMD is working overtime to release this SKU, to restore the performance leadership of the Radeon HD 7900 series. The dual-GPU card, according to the specifications at hand, bears AMD's coveted "GHz Edition" badge; its core is clocked higher than that of the HD 7970.

 

But first, the board shot of this beast. Right away you'll question its authenticity for using a 70 mm fan instead of a lateral-flow blower, but that design change serves a purpose. Despite its high performance, the previous-generation Radeon HD 6990 was plagued with user complaints of high noise. That's because a single, normal-sized lateral-flow blower was positioned in the center, blowing through two sets of aluminum channels at a very high speed. With the HD 7990, AMD instead borrowed the ventilation design of NVIDIA's GeForce GTX 590 to a large extent: it reused the fan found on the reference-design HD 7850 and HD 7770, and placed it in the middle of two heatsinks.

 

The picture reveals the card to be fairly long. AMD chose a fancy PCB number to denote "leeeet" (elite); it did similar word-play with "AUSUM" around the HD 6990. The card uses an AMD-rebadged PLX PEX8747 PCI-Express 3.0 48-lane bridge chip, which has "broadcast" features that make it fit for dual-GPU graphics cards. Moving on to specifications, the HD 7990 features a 1 GHz core clock speed, with 1250 MHz memory. The card has a total of 6 GB of GDDR5 memory, 3 GB per GPU. It features completely unlocked 28 nm "Tahiti XT" GPUs with 2048 stream processors each. It draws power from two 8-pin PCIe power connectors. Display outputs include one dual-link DVI and four mini-DisplayPort connectors. Slated for a hard launch on April 17, AMD's Radeon HD 7990 6 GB "New Zealand" will target a price point of US $849.
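Running the same bandwidth arithmetic on those memory specs (and assuming each Tahiti XT keeps the HD 7970's 384-bit memory interface, which the article doesn't spell out):

$$1250\ \text{MHz} \times 4 = 5.0\ \text{GT/s}, \qquad \frac{384\ \text{bit}}{8} \times 5.0\ \text{GT/s} = 240\ \text{GB/s per GPU}$$

That's 480 GB/s aggregate across both GPUs, although, as with any dual-GPU card, the two memory pools don't combine for a single frame.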

 

Source: http://www.techpowerup.com/163386/AMD-Radeon-HD-7990-Reference-Board-Pictured-Specs-Confirmed-in-GPU-Z-Screenshot.html

 

So much for counterattacking the Nvidia GTX 680, now that the recent AMD Radeon graphics cards are all going PCI-Express 3.0 whereas Nvidia has yet to follow.

---

@Malevolence

 

This card is longer than my whole computer xD

---

It's always overkill, but they're still pretty to look at. :P

---

Malevolence, you're back! :D

 

As for that latest GeForce video card, it's heavy with metal, and powerful too.

---

While the GeForce GTX 650 is about to be released around October, here's the GeForce GTX 660 Ti, released just last month (and it sure comes in colors):

 


 

 

Article 1: http://hothardware.c...Gigabyte-Zotac/

Article 2: http://www.bit-tech....ti-2gb-review/1

Article 3: http://www.techspot....rce-gtx-660-ti/

Article 4: http://www.tomshardw...eview,3279.html

 

At an average starting price of $300 USD, I don't think I could afford this... yet. The video cards may be starting to get more worthwhile, but the recent PC games aren't.

 

 

As for the GeForce GTX 650, this is what it looks like:

 


 

Source: http://wccftech.com/msi-geforce-gtx-650-power-edition-overclocked-leaks/

Edited by zocom7

---

Looks like the next-generation GeForce 700 series will be released sometime in 2013, with some of the parts already released for mobile PCs. But hold on, even the GeForce 800 series could quickly follow sometime in 2014. (http://en.wikipedia.org/wiki/GeForce_800_Series)

 

Sources of news of the GeForce 700 series:

http://www.pcper.com/news/Graphics-Cards/NVIDIA-Rumored-Release-700-Series-GeForce-Cards-Computex-2013

and

http://wccftech.com/nvidia-geforce-700-series-specifications-detailed-gtx-780-based-gk110-gk114-power-gtx-760-ti/

and

http://www.fudzilla.com/home/item/31077-desktop-geforce-7-comes-by-computex

 

Only one problem though: I expect 48-bit and/or 64-bit "deep color" support in the latest high-end graphics cards (Windows 7 already supports it), but only very expensive professional cards like the Nvidia Quadro and ATI/AMD FireGL have the option. HDMI 1.3 also supports deep color, but only between video cards and monitors/TVs that both support it.
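"Deep color" is just more bits per channel; the jump from the standard 24-bit to 48-bit works out like this:

$$24\text{-bit: } 2^{8} = 256 \text{ levels per channel}, \qquad 2^{24} \approx 16.7 \text{ million colors}$$

$$48\text{-bit: } 2^{16} = 65{,}536 \text{ levels per channel}, \qquad 2^{48} \approx 2.8 \times 10^{14} \text{ colors}$$

The extra precision mostly prevents banding in smooth gradients; whether a game engine actually renders and outputs at that depth is another matter.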

 

This reminds me of why the Frostbite 3 engine does not seem to use "deep color" in BF4: when I first looked at the gameplay video, it just wasn't there.

Edited by zocom7

---

More info on the GeForce 700 series, including a possible future very high-end card to be sold for $999 USD!!

http://www.guru3d.com/news_story/nvidia_geforce_gtx_780gtx_770_and_gtx_760_ti.html

 

But too bad, that $999 card still cannot use a 48/64-bit "deep color" option in Windows/Mac OS. :(

Edited by zocom7

---

But too bad, that $999 card still cannot use a 48/64-bit "deep color" option in Windows/Mac OS. :(

There. Is. No. Need. For. It.

 

Didn't you even read the replies to the topic you started about this? :rolleyes:

---

Nvidia GeForce Titan vs. Nvidia GeForce GTX 690:

http://www.pcmag.com/article2/0,2817,2415642,00.asp

 

Here's what the Nvidia GeForce Titan looks like:


 

Some people are saying that the GeForce GTX 690 is faster than the Titan due to its dual-GPU design. IMO, the Titan should have been named the GeForce 685. It's quite odd that both of them cost a pricey $999 USD each, and that they cost more than the AMD Radeon HD 7990.

Edited by zocom7

---

Here's a look at the new GeForce GTX 780:


 

Info: http://www.pcworld.com/article/2039556/nvidias-geforce-gtx-780-a-titan-for-the-rest-of-us.html

 

... and the GeForce GTX 770:


They both look quite similar to the original GeForce Titan.

 

 

Brand samples such as...

PNY's version of the GeForce GTX 770:


 

EVGA's version of the GeForce GTX 780:


 

and the Classified version of EVGA's GeForce GTX 780:


Info:

http://videocardz.com/42082/evga-launches-seven-geforce-gtx-780-cards-classified-superclocked-acx?utm_source=rss&utm_medium=rss&utm_campaign=evga-launches-seven-geforce-gtx-780-cards-classified-superclocked-acx

and

http://www.evga.com/articles/00746/

Edited by zocom7

---

I wonder why AMD mentioned that Microsoft isn't going for DirectX 12 anytime soon (maybe the decline of PC gaming and the flop of Win8 aren't making future games any better or more useful):

http://www.hardwarecanucks.com/news/amd-roy-taylor-directx12/ and http://tech.slashdot.org/story/13/04/12/1847250/amd-says-there-will-be-no-directx-12-ever

 

 

With that in mind, here comes the new AMD Radeon Rx 200 series (not the HD 8000 and 9000 series), built on their 28 nm technology with DirectX 11.2. Too bad DirectX 11.2 only works on Windows 8.1 (and the Xbox One) for now. :(

Wiki info: https://en.wikipedia.org/wiki/Radeon_Rx_200_Series

 

An example would be this high-end AMD Radeon R9 290X (with 4GB GDDR5 VRAM and triple fans!):

Info: http://www.amd.com/us/press-releases/Pages/amd-radeon-r9-290x-2013oct24.aspx

Review: http://www.pcper.com/reviews/Graphics-Cards/Sapphire-Radeon-R9-290X-Tri-X-4GB-Graphics-Card-Review

Review 2: http://www.anandtech.com/show/7457/the-radeon-r9-290x-review


Edited by zocom7
