PhysX with multi-GPU mode
Posted: Sun Feb 08, 2009 1:16 pm
by ReadyMan
Krom mentioned that an old gpu could be used in a new system for PhysX.
I hunted around and found this
http://www.guru3d.com/article/physx-by-nvidia-review/1
The article confirms it: I can use my new GTX 260 for graphics and install my old 8800GT as a dedicated PhysX unit, not in SLI mode.
There are a couple of caveats:
In multi-GPU mode there's actually a Vista limitation: a second monitor must be attached to enable PhysX on the second GeForce GPU, and you must extend your Windows Vista desktop onto that monitor. This limitation does not exist in Windows XP.
The article was written 6 months ago, so I'm wondering if this still works. Gonna fish around some more before popping in my 8800.
Is anyone doing this at the moment?
Interestingly, if you have integrated graphics, like a mainboard with an integrated GeForce 8200 chipset, you could run PhysX on the mainboard's integrated GPU and just add a dedicated GPU (graphics card) for gaming...
Hope this information helps someone.
RM
Posted: Sun Feb 08, 2009 1:38 pm
by Admiral LSD
I've yet to be convinced that PhysX, whether it's provided by nVidia GPUs or those hilariously overpriced and underwhelming AGEIA PPUs, is anything other than hype. Wake me up when the OMG PHYSX!! games actually start appearing. And by that, I don't mean games that merely make use of it, I mean games that actually do something with it that can't realistically be provided by any other means.
Posted: Sun Feb 08, 2009 1:47 pm
by ReadyMan
I've been reading all morning about a dedicated PhysX card... seems that while you *can* put in an 8600-series GeForce or higher, if you already have a decent video card (200 series) that runs PhysX, you won't see much, if any, difference.
I'm seeing 2 arguments a lot:
There aren't enough games out yet to warrant a dedicated PhysX card (Mirror's Edge and the UT3 expansion pack being two of the main ones out).
And a decent single-card solution will do PhysX and video just fine... kind of like SLI...
However, if you have an old 8800GT sitting around (like me), then why not put it in? My concerns are power (I have a 750W supply, which *should* be enough) and compatibility issues within Windows itself with the drivers (this is just inexperience, because I don't know what to expect; it seems like you just plug it in, connect the power, add the PhysX drivers from nVidia, then go to the nVidia panel and enable one card as your primary and the other as your dedicated PhysX card).
Has anyone tried this yet?
---edit--- Stupid question: do I need to download the PhysX system software in addition to the current drivers for my 260?
Posted: Sun Feb 08, 2009 1:58 pm
by Admiral LSD
Here's the thing: an 8600GT does 113 GFLOPS, while an 8800GT does a little under 3 times that at 336. To put that into some kind of perspective, the Cray-1 supercomputer only did 136 MFLOPS, roughly three orders of magnitude below what today's GPUs are capable of, and it cost USD$8.86 million in 1976. Even the most complex physics effects in games are unlikely to need that kind of processing power, which means your 8800GT will be sitting there sucking juice while remaining woefully under-utilised. The other problem is that with today's CPUs you have anywhere up to 3 spare cores that could easily be roped in to perform these physics calculations but basically lie fallow while you're playing games. It's a situation that's only going to get worse as more and more cores get added: Intel and AMD are already planning 8-core chips, and Intel has demonstrated a couple of 100+ core research chips.
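[Editor's note: a quick sanity check on the figures quoted above, using only the numbers given in this post.]

```python
import math

# Throughput figures quoted in the post above:
# 113 GFLOPS (8600GT), 336 GFLOPS (8800GT), 136 MFLOPS (Cray-1).
gflops_8600gt = 113.0
gflops_8800gt = 336.0
gflops_cray1 = 136.0 / 1000.0  # 136 MFLOPS expressed in GFLOPS

# 8800GT vs 8600GT: "a little under 3 times" checks out.
ratio_8800_vs_8600 = gflops_8800gt / gflops_8600gt  # ~2.97x

# 8600GT vs Cray-1: about three orders of magnitude, not one.
ratio_8600_vs_cray = gflops_8600gt / gflops_cray1   # ~831x
orders = math.log10(ratio_8600_vs_cray)             # ~2.9

print(f"8800GT vs 8600GT: {ratio_8800_vs_8600:.2f}x")
print(f"8600GT vs Cray-1: {ratio_8600_vs_cray:.0f}x (~{orders:.1f} orders of magnitude)")
```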
Posted: Sun Feb 08, 2009 2:54 pm
by Krom
Yes, but what that 8800 GT does, it does better than even an 8-core Intel or AMD processor could keep up with. According to Intel, the i7 920 puts out 42 GFLOPS, compared to your figure of 336 GFLOPS for the 8800 GT. The problem isn't the amount of power the card has to push PhysX processing with, the problem is the amount of software that actually uses it. The reason there are no meaningful uses for PhysX currently is that the amount of acceleration PhysX brings to the table makes it impossible for the CPU to perform the same job, and no game developer in their right mind would commit to relying on a feature that is all but guaranteed to break compatibility and support on better than half the systems out there.
Now, if Nvidia and AMD got together and cross-licensed PhysX so it would work on both Nvidia and AMD GPUs, that would greatly accelerate its uptake. Otherwise software developers just won't support something that will only be available in a small portion of the market; it wouldn't make sense. And even if AMD got on board today, it would still take software developers another 12-18 months before we would start to see titles that really rely on PhysX.
Re:
Posted: Sun Feb 08, 2009 4:18 pm
by Admiral LSD
Krom wrote: Yes, but what that 8800 GT does, it does better than even an 8-core Intel or AMD processor could keep up with. According to Intel, the i7 920 puts out 42 GFLOPS, compared to your figure of 336 GFLOPS for the 8800 GT. The problem isn't the amount of power the card has to push PhysX processing with, the problem is the amount of software that actually uses it. The reason there are no meaningful uses for PhysX currently is that the amount of acceleration PhysX brings to the table makes it impossible for the CPU to perform the same job.
That kind of misses the point, though. What can 336 GFLOPS of potential PhysX processing realistically bring to the table that 42 GFLOPS of dedicated CPU cores (which aren't being used for much of anything at the moment anyway) can't, that would offset the 200W or so cost of having the card in the system? I suspect not much. AGEIA's own PhysX PPU was a spectacular failure because they failed to convince people why it was necessary. Now we have nVidia trying to do the same thing with more power on tap but even less reason to buy into it. It all reeks of nVidia wanting to sell higher-priced GPUs to segments of the population that don't really need them, just as it did with SLI in the past. Until they deliver a PhysX game that not only does stuff that can't possibly be done without it but also does it without looking like a stupid "because we can" tech demo, I refuse to believe it's anything more than hype.
Posted: Sun Feb 08, 2009 5:04 pm
by Krom
That's just a lack of imagination on your part, not a problem with the concept or the hardware.
It can bring more movable objects, more detail, and more realism to games. Think of the "cinematic physics" in Half-Life 2: Episode 2, except calculated on the fly, dynamically, depending on where the bridge or building was shot from, which way the wind was blowing, and any number of other factors the developers want. The so-called "cinematic physics" they used are just that: cinematic, something that always plays out the same way every time. You might be able to view it from any angle, but the events themselves are scripted in a painfully obvious manner.
Assuming they have enough processing power, they could make a game where virtually everything that sticks out of the ground could be destroyed or knocked over. No more completely impassable, immovable and unclimbable wooden fences/locked doors/automobiles/etc. blocking your path. No more tin-sided shacks the size of a closet that can repel the impact of a car/truck/tank/freight train as if it were as light as a feather.
Posted: Sun Feb 08, 2009 5:13 pm
by Admiral LSD
All of that wouldn't need to be happening all at once, all the time, though. The engine would only have to deal with the smaller subset the player is currently interacting with. Break it down into smaller chunks like that and suddenly you don't need anywhere near as much raw processing power to achieve it.
Posted: Sun Feb 08, 2009 7:38 pm
by Krom
Physics processing is pretty easy to divide up and do in parallel, but it's not an issue of how easy it is to break up into smaller chunks. The issue is that modern processors simply aren't fast enough to do it in real time, period, and won't be any time soon. Either you are giving modern processors too much credit, or you are taking the amount of calculation required too lightly. It's not as if four people become any more likely to move a ten-ton rock a hundred yards uphill in one minute just because you converted it to ten tons of sand instead.
The player isn't just going to sit there and wait while the game pauses, saying: "Please wait 5 minutes while the physics for this train wreck are calculated."
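[Editor's note: the decomposability both posters agree on can be sketched in a few lines. This is a hypothetical toy example, not taken from PhysX or any real engine: each particle's Euler step depends only on its own state, so the work splits cleanly into chunks that can be handed to separate workers. Krom's point stands regardless — splitting the work doesn't shrink it, and every chunk still has to finish inside the frame budget.]

```python
# Toy data-parallel physics step: particles under gravity, integrated
# chunk-by-chunk. Splitting is trivial; the total arithmetic is unchanged.
from concurrent.futures import ThreadPoolExecutor

GRAVITY = -9.81
DT = 1.0 / 60.0  # one 60 Hz frame

def step_chunk(chunk):
    """Advance one chunk of (y, vy) particle states by a single Euler step."""
    out = []
    for y, vy in chunk:
        vy += GRAVITY * DT   # apply gravity to velocity
        y += vy * DT         # advance position
        out.append((y, vy))
    return out

def step_world(particles, n_chunks=4):
    """Split the particle list into chunks and integrate them concurrently."""
    size = max(1, len(particles) // n_chunks)
    chunks = [particles[i:i + size] for i in range(0, len(particles), size)]
    with ThreadPoolExecutor() as pool:
        results = pool.map(step_chunk, chunks)
    return [p for chunk in results for p in chunk]

# 1000 identical particles dropped from y = 10, advanced one frame.
particles = step_world([(10.0, 0.0)] * 1000)
```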
Posted: Sun Feb 08, 2009 7:55 pm
by Admiral LSD
I just don't see how in-game physics processing needs the several hundred GFLOPS of processing power nVidia are now telling us it does, in a single spot at a single point in time, without making things look like an over-the-top tech demo, ultimately distracting from the overall gameplay experience it's trying to enhance. I'll believe it when I see it: when nVidia can irrefutably prove that PhysX enables things that just aren't possible without it (and that actually enhance the overall gameplay experience). AGEIA couldn't do it and went bust trying; I honestly don't see nVidia doing any better.
Posted: Mon Feb 09, 2009 10:25 am
by Warlock
It matters a lot when it comes down to that. Hell, in Max, a 9-block wall getting hit with a ball takes a while to process all the data, but when I port it to my GPU it takes 2 seconds for 180 frames versus 3 minutes on the CPU.
But I don't get the big rave over this, because IIRC Red Faction did environments that blow up and fall apart, and now they're acting like destructible environments are the coolest thing, all shiny and new. For the time RF came out that was some badass tech, but it never hit it off big.
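[Editor's note: the speedup implied by the timings above, as quoted (same 180-frame simulation, 2 seconds on GPU versus 3 minutes on CPU).]

```python
# GPU-vs-CPU speedup from the figures in the post above.
frames = 180
gpu_seconds = 2.0
cpu_seconds = 3.0 * 60.0  # 3 minutes

gpu_fps = frames / gpu_seconds       # 90 frames per second
cpu_fps = frames / cpu_seconds       # 1 frame per second
speedup = cpu_seconds / gpu_seconds  # the GPU run is 90x faster
```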
Posted: Mon Feb 09, 2009 11:40 pm
by ReadyMan
The PhysX demos are great! For now I think my 260 handles things just fine. No need to add in the 8800 (need being the operative word).
When new games are available, I'll install the 8800 (keeping it here in case of the 260 dying). Seems a waste, but good to have it. (I gave my 7800 along with my x2 4400/ram/mb to the friend who helped me install the new mb/cpu.)
Still trying to figure out the best settings for the new video card, but am enjoying the results. Wish I'd made this jump months ago, instead of going with the 8800.
Posted: Fri Feb 13, 2009 11:19 am
by TOR_LordRaven
Why not compare them head to head? Get an ATI system and play Mirror's Edge, then play it on an nVidia system with PhysX, and see which actually looks more real.
As for Red Faction: their "GeoMod" engine was pretty cool, but it wasn't real physics. In fact, if you noticed, you could shoot a wall 4 or 5 times in the same spot with an RPG and it would "loop" through 4 or 5 different models of crumbled rock over and over.
You would break through to the other side, but it wasn't real physics, and I don't think that was their intent anyway. They just wanted to give you the ability to blast through a wall, which was and still is cool.
I think nVidia had a mousetrap/ping-pong ball demo to demonstrate PhysX at one point. I highly doubt that any CPU could run even that simple test on its own with the same fluid result.