Tweak3D - Your Freakin' Tweakin' Source!
Tweak3D GeForce Chat Log

The Chat

If you missed our GeForce chat with NVIDIA, don't worry; you can join in next time. For now, you can read through the log of our chat session and see what everyone had to say. There was a Q&A section where we let the 300 (or so) participants ask questions about the new GeForce 256 GPU. The questions were answered by Nick Triantos, OpenGL guru for NVIDIA, as well as Christopher Donahue, the manager of developer relations. Additional support was provided by Paul (no last name specified) and Kenneth Hurley, both from NVIDIA. Also, towards the end of the chat, we had another guest stop by... Pyronious (Patrick) from The Whole Experience.

Before I get to the log, I want to send out a massive thank you to everyone who helped us out and participated, especially NVIDIA, Greg at Riva Extreme, and the whole staff that operates #Riva on Efnet. Thanks guys!

Note: If you haven't read our GeForce 256 Preview yet, check it out here. Now, on to the chat log!

The Log

Here it is, slightly edited for your viewing pleasure:

<Nick Triantos> Hi all
<greg[RE]> Question for Kenneth Hurley: <SNG_HaLo> Will the GeForce support non-square non-power-of-2 textures?
<Kenneth Hurley> I'll let Nick answer that one.
<Nick Triantos> Uhh, which one? About non-power-of-2?
<[T3D]Dan> Wait a second, can we explain the rules first?
<Kenneth Hurley> Non-Square for sure.
<Avatar-> nvidia: yes, can it do rectangular textures?
<Nick Triantos> Lemme first explain a few rules...
<Nick Triantos> I will talk about things that have been announced. I will not talk about things that have not been announced.
<Nick Triantos> Hang on, lemme change my name.
<Nick Triantos> ok, so who's next?
<greg[RE]> here. <FPS3D-Dan> What type of bump mapping does the GeForce support?
<Chris Donahue> The best kind - real geometry...
<MicroDooD> <mojomofo> will there be a pci version?
<Nick Triantos> It's up to our board customers to decide if they want to make a PCI version.
<MrBread> <Leviat4n> Will the NV10 (or GeForce) be CPU independent? That is, will I get the same kind of performance on a P2-400 as I would on a P3-550?
<Nick Triantos> Regarding bump mapping, we can support some really cool techniques that people haven't been able to do before.
<Chris Donahue> You will always get better performance with faster CPU's
<greg[RE]> here's one that I've been wondering too.....<Avatar-> How many megatexels can the card do?
<greg[RE]> <Avatar-> Can it do 2 textures per pipeline like the TNT could?
<[T3D]Sang> What is the minimum CPU recommended for the GeForce256?
<Nick Triantos> Sorry T3DSang, all I can say is that we have a new QuadPipe that is significantly bigger/better than previous generations of parts.
<Nick Triantos> The reason that is, by the way, is that games also do lots of other things that use the CPU.
<Chris Donahue> Minimum CPU = 266mhz
<MicroDooD> nvidia guys, can you answer questions on Price?
<MicroDooD> seems to be on everyones mind
<Chris Donahue> Nope, that is up to the OEM's
<Nick Triantos> Our customers decide the price -- we don't.
<MicroDooD> <Fulg0re-> can u pls ask him about the fillrate...does the gForce256 do quad texturing?
<Chris Donahue> The pricing will be announced shortly - check Creative and Guillemot and all the other OEM sites!
<Nick Triantos> There is one for sale on EBay, though... :-)
<MicroDooD> heh, what's not for sale on ebay? :)
<Kenneth Hurley> EBay:
<MicroDooD> <DigitalOn> Q: What are the clock speeds of the core and memory on the GeForce?
<Nick Triantos> Clock speeds are decided by our OEM customers, not by us.
<Chris Donahue> We do recommend, but it's up to them in the end.
<Razlak> but there will only be one part, not speed binned this time correct?
<MrBread> *rhink* my question is, it appears that the clock speed of the GeForce is only 120 MHz (480 Mp/sec divided by 4 pixels/clock = 120 MHz). This seems like a step down... why isn't the clock speed higher? Will it be easy to overclock beyond that?
<MicroDooD> <shabby> ask him what the max texture size of the geforce is
<Nick Triantos> Again, the speeds are decided by the OEMs, not by us.
<Nick Triantos> As for max texture size, it's 2048x2048, same as TNT.
<Chris Donahue> And - being a responsible chip maker, we do not recommend over-clocking
<Nick Triantos> :-)
<Kenneth Hurley> ;-))
<MicroDooD> <Woofer_> How has the developer reaction been to hardware lighting so far?
<Chris Donahue> It's been great - we just finished our US dev conference and the attendees were blown away
<MrBread> Wag_* ask about the Janus HDTV integration they promised way back in April
<Chris Donahue> They can't wait to get their hands on the GeForce256
<Nick Triantos> Next?
<MrBread> *JR* Q:Any reason for the lowish sounding fillrate improvement (compared to 'other' announced boards?)
<Chris Donahue> This is the next logical step in scene complexity
<Kenneth Hurley> In fact, I've been getting calls asking for them from developers.
<MicroDooD> <Otana> Will the GeForce be able to breathe new life into older systems? With it taking all of the graphics workload, could you play a game, say in the league of Homeworld, on something in the P-150 to P-200 range?
<Nick Triantos> First of all, I don't know if P-150 or P-200's have AGP, and the initial GeForce boards will likely be AGP...
<Nick Triantos> Second, as I said before, there's really a lot more to a game than just the graphics, and those other parts of the software will also need fast CPUs.
<MrBread> *Wag_* Is anyone planning to do an AIW type GeForce board with Janus DTV tuner?
<Chris Donahue> We actually have not done any testing with non-AGP systems as yet.
<Chris Donahue> We have done a reference design of a board that has a tuner - it again will be up to an OEM to implement the design.
<greg[RE]> I have a question from SNG_HaLo: He wants to know if textures are still being "swizzled": He's heard of some developers complaining about slow upload times due to conversion to an 'internal format'.
<Nick Triantos> Next?
<MicroDooD> <LaRz17> Will drivers for multiple operating systems be released at the same time?
<Nick Triantos> Greg, regarding textures...
<Nick Triantos> The texture downloading has been heavily optimized on GeForce.
<Razlak> so no repeat of the Tribes fiasco?
<MrBread> *Avatar-* How many megatexels can the card do?
<Nick Triantos> As for driver releases, I think NVIDIA is planning to release all drivers at once.
<Nick Triantos> As for Tribes, Lemme go on record again:
<Nick Triantos> The "Tribes problem" was problems on both the Tribes side and on the driver side. The driver side was a bug. The Tribes side was a minor-reimplementation of their texture engine.
<Nick Triantos> Next?
<Razlak> will the GeForce be able to do full-scene anti-aliasing or something of the sort without much of a hit in speed?
<Nick Triantos> Full scene AA in any system will cost performance. If you don't believe that, you're being misled by some marketing folks. :-)
<Razlak> requires doubling the resolution, of course
<Nick Triantos> mostly, Razlak is right. Depends on your AA technique.
<Nick Triantos> next?
<MrBread> *Avatar-* How many megatexels can the card do?
<IBMosher> <Alfman> What is the maximum ram it will support, what type of ram, and what different configurations will it be offered in?
<[Jar]> <|warlock|> the maximum poly rate quoted is huge - and it's going to generate quite a bit of traffic on the AGP bus. How is this going to affect fillrate when games use lots of polys?
<Chris Donahue> The numbers we announced today were 480 Megapixels
<MicroDooD> <Species84> Do games need to be re-written to take advantage of geo excel?
<Nick Triantos> Jar - Good question. The GeForce has been optimized to handle more compact data than any other card that I know of. It should be able to rock, even with large amounts of data.
<Chris Donahue> The maximum memory configs will again be up to the OEM's - but we did not wimp out on the ability for GeForce to support HUGE amounts of RAM
<Nick Triantos> Microdood, all OpenGL apps that use the OpenGL pipeline will automatically get xform/lighting acceleration. D3D apps will need a somewhat minor port to DX7.
<Razlak> any elaboration on that #?
<Razlak> for ram
<Chris Donahue> we will leave it up to the OEM's to announce their final configs
<Nick Triantos> hehe no problem.
<Nick Triantos> Next?
<greg[RE]> from Daniel@fps3d: is there any optimizations for multi-threaded applications in the geFORCE drivers?
<Razlak> <jemhadar1> any plans using the chip in a arcade system with a non x86 cpu
<Chris Donahue> We always are interested in other applications of our products - but there are no current products that I am aware of that are not X86 based.
<Nick Triantos> Daniel - Depends on the title. Some poorly written apps can perform WORSE when multithreaded. Better apps, however, will benefit if there are multiple CPUs and your OS supports multi-CPU (such as WinNT). On those apps, we do see performance improvements, in some cases, very nice ones.
<Chris Donahue> We are the official graphics sponsor of the Sony Metreon Center in San Francisco :)
<greg[RE]> Nick: from me. Any comment on Q3 and r_smp 1? :)
<[Jar]2> <Swish> when to developers get to test out the boards with their games?
<Nick Triantos> r_smp 1 was initially tested on NVIDIA drivers, as far as I know... ;-) And yes, it does help when you have multiple CPUs.
<Nick Triantos> Hang on, chris is typing furiously. :-)
<[T3D]Dan> hah
<Chris Donahue> Jar: they are already testing them out - there will be several apps that are optimized for GeForce announced soon
<MrBread> *CausticPu* How about the heat generation? Are they recommending heatsink/fan combos for the OEMS?
<Chris Donahue> Wait till you see the news from ECTS!
<greg[RE]> OK, for everyone joining just now, send questions to myself, [T3D]Dan, MicroDooD, MrBread, or IBMosher. NOT to the nvda guys.
<Nick Triantos> Again, I think it depends on the OEMs, how they choose to clock the chip/card, etc.
<Nick Triantos> OK Next?
<MicroDooD> <DigitalOn> Will OEMs have the ability to add "Component Out" for Rear Projections TV for the HDTV support in the card?
<Nick Triantos> Hmm, sure, I guess they could, with an external encoder. They wouldn't lose any chroma bandwidth if they did it right.
<Nick Triantos> So, on my projector at home, I just use direct VGA into the projector.
<Nick Triantos> ok next?
<Nick Triantos> :-)
<[T3D]Dan> Here's one Felby asked the guys when we were at NVIDIA, but I figured some people were interested: <Skier[HH]> Will the GeForce take more watts then the TNT2 Ultra?
<Chris Donahue> Power consumption is about the same as TNT2
<Nick Triantos> next?
<Razlak> <HamsterHu> what part of the PC Architecture will be the biggest lag on the GeForce? basically what spec machine would be GPU limited?
<Nick Triantos> Hamster - I think that getting data to the card is the toughest task we're trying to overcome (with features like AGP4X fast writes). In other words, I believe AGP bus will be the biggest bottleneck.
<Nick Triantos> Next?
<[Jar]2> <orlock> WIll they still be supporting Xfree86/Mesa3D/glx/linux/etc like they have in the past?
<Nick Triantos> Yes.
<Nick Triantos> Next?
<IBMosher> <_R0M> does it render-to-texture, and are all the interpolants perspective-correct?
<Nick Triantos> All interpolants have been perspective correct since TNT (even since RIVA 128, except for alpha). And yes, GeForce can render to a texture.
<Kenneth Hurley> BTW, nick _R0M is from Matrox
<Nick Triantos> That's ok, he's welcome to listen in. Keeps him from shipping products when he spends time here. :-)
<MrBread> hehe :)
<Chris Donahue> Might learn something too :)
<MrBread> stara: the TNT was given to developers in a lunchbox, will the geforce also be given in another lunchbox? and if so, what colours are available?
<[T3D]Dan> hahaha stara
<Chris Donahue> Nope, we have a totally new and totally cool new dev box to give away :)
<Nick Triantos> Next?
<IBMosher> <[T3D]xero> how will clock speed affect poly count capability?
<Nick Triantos> A faster clock will definitely enable higher polygon counts, but you have to understand that the poly counts are already significantly higher than anything else the PC has ever seen.
<Nick Triantos> Next?
<Razlak> <Gldm> yes, will they use very expensive sub 5ns SDRAM, even more expensive RDRAM, or try to use a cheaper DDR or interleaved SDRAM system?
<Chris Donahue> Once again, we will leave it up to our OEM's to determine the clock speeds.
<Chris Donahue> and RAM types
<Nick Triantos> Next?
<greg[RE]> OK guys, I'm getting quite a few questions about "SLI" or parallel type configurations. Is it a possibility?
<Chris Donahue> Some might even be using LMNOPRAM
<greg[RE]> haha
<Nick Triantos> Hang on, Chris is a slow typist. :-)
<Chris Donahue> No current plans - again, up to the OEM's :) (love that answer!)
<MrBread> hehe sheesh
<Nick Triantos> Next?
<[T3D]Dan> <DDDgamer> can you tell me about dvd support. can i trash my hardware dvd card?
<Nick Triantos> The DVD decode is much improved on GeForce. But it will run great with software or hardware DVD.
<Chris Donahue> next?
<IBMosher> <DevoX> A variation of an earlier question: Will the GeForce drivers themselves have any optimizations for multi-CPU systems? (ie: is it possible, etc)
<MrBread> <Skier[HH]> Plz ask nvidia if the GeForce will have anything like the G400's dualhead feature, thanks
<Nick Triantos> DevoX, Actually, the OpenGL driver already does have some optimizations for multi-CPU systems, but that's something we're still working on...
<Chris Donahue> No current plans for a dual-head implementation.
<Nick Triantos> Next?
<[Jar]> <Greenbean> is the GeForce256 an excellent card for running high-end 3D application software, and do you feel you're letting previous TNT2 customers down by releasing another card so soon?
<Chris Donahue> What? Next support? oh, sorry :)
<Nick Triantos> Greenbean - Both are great cards. TNT2 kicks major booty, and GeForce will kick more major booty. :-)
<Nick Triantos> And yes, GeForce will run high-end 3D very well.
<Nick Triantos> Next?
<[T3D]Dan> <[T3D]sine> are there any plans to use a .18 micron die size?
<Chris Donahue> We try and synchronize our releases with system OEM's schedules - thus the accelerated release schedule
<Nick Triantos> GeForce will be shipping before .18 micron is really commercially viable.
<Nick Triantos> Eventually, anything's possible.
<Nick Triantos> Next?
<greg[RE]> <Fulg0re-> does the geFORCE256 have an accumulation buffer? also, does it support all the GDI+ operations?
<Chris Donahue> But we will be moving to .18 as soon as it is commercially feasible
<IBMosher> <Kaii> What about rebate/trade-in programs for owners of TNT/TNT2?
<Nick Triantos> Fulg0re, no, we do not have hardware accelerated accum buffer in GeForce. I'm not sure about GDI+, sorry. I'm a 3D guy.
<Chris Donahue> That would be up the OEM's...
<Chris Donahue> What part of GDI+ is interesting to you?
<Razlak> <DFo3D> Has nVidia taken a look at 3D-RAM5? Is it fast enough for geforce?
<Nick Triantos> Many of the uses you would have for an accum buffer or other weird buffer (:-)) is doable without that support, though.
<Nick Triantos> DFo3D - Sorry, I'm a software dude, don't know about that type of ram.
<Chris Donahue> We will have an example of motion-blur soon that will blow you away :)
<Nick Triantos> Next?
<[Jar]> <orlock> Is it true that the new SGI x86 machines will be using nvidia 3d processors?
<Nick Triantos> I don't think SGI has announced anything about their future products. Sorry.
<Chris Donahue> Next?
<[T3D]Dan> mader_ and others want to know what kind of performance hit to expect when using 32-bit color, as opposed to 16-bit color.
<Nick Triantos> Good question. On GeForce, there is definitely much less of a hit in 32-bpp than in, say, a TNT. However, in some apps, especially high-res, there are still some performance hits.
<Nick Triantos> Next?
<Nick Triantos> Depends on the app, though...
<MrBread> <HiVolt> can u ask them what's the core clock speed they're gonna be recommending to OEM's?
<Chris Donahue> On some of the testing we have done we have seen no difference between 16 & 32 bit
<Chris Donahue> Once again, we will leave it up to our OEM's to determine the clock speeds.
<Chris Donahue> Next?
<greg[RE]> Do you plan on implementing clock sliders in the ref drivers? (from Burn69)
<Nick Triantos> I don't think that you can get drivers Microsoft-certified with overclocking controls in them.
<Nick Triantos> In other words, most likely, we won't put that in our reference drivers, but I'm not really sure to be honest.
<Nick Triantos> Next?
<Razlak> <Gldm> How's the GeForce do when paired with an Athlon so far?
<Chris Donahue> All - when you get time check out the WXP website (www.WXP3d.COM) they have some screen shots of their game running on a GeForce256 - it looks amazing!
<Nick Triantos> GeForce kicks butt on an Athlon. That's a really fast system, and GeForce is a really fast graphics card.
<Chris Donahue> It kicks ass! We will be running all Athlons at ECTS and Summer Camp UK next week!
<Nick Triantos> Next?
<Razlak> so no obvious problems on the AMD chipset?
<[Jar]> <FPS3D-Dan> How much of a performance increase will we see when increasing the amount of memory ie from 32 to 64 to 128?
<Nick Triantos> No, Athlon is the fastest computer I've seen, period. I'm still waiting for them to send me one for home. :-)
<Chris Donahue> Nope, we have been working real closely with AMD to ensure not just compatibility but kick butt performance on all of their CPU's!
<[T3D]Dan> <NetGuruFL> what are the plans if any for alt. os support for the geforce, such as linux and beos
<Nick Triantos> FPS3D-Dan - Depends a lot on what game you're talking about. Something like Quake3 will use up to 33MB of textures in just one level, but the largest resolutions will need about 24MB. That should fit well in a 64MB system. Other games will need more or less.
<Nick Triantos> We are definitely planning to support other OSes.
<Chris Donahue> We answered that earlier - we will support all the OS's we now support
<Chris Donahue> Next?
<greg[RE]> <Chrizz> Will Windows 2000 drivers be available for use with Release candidate versions (before the final Win2k release)?
<Chris Donahue> Yes, we will have full support for Windows 2000
<Chris Donahue> on the release versions!
<Nick Triantos> Next?
<Razlak> ok.. this may touch a soft spot..
<Nick Triantos> Go for it. :-)
<Razlak> <DFo3D> I read somewhere that the XGL200 author Scott Cutler got a summer job at nVidia, did he serve donuts, or was there some involvement with the fact he reverse engineered Glide?
<Nick Triantos> DFo3D - Scott does currently work here, yes (as an intern). He's super-bright, and he's working on D3D for GeForce now.
<Chris Donahue> Hey Nick, we're out of donuts...
<Nick Triantos> He's not doing anything at NVIDIA that has to do with his wrapper.
<Nick Triantos> Next?
<MrBread> <[T3D]xero> will AGP4X be a necessity to get the full performance of the card?
<Nick Triantos> Depends a LOT on the app.
<Nick Triantos> Some apps are very AGP dependent, others are not so. On the ones that are, yes, AGP4X makes a big diff.
<Nick Triantos> Next?
<MicroDooD> <Marudek> will GeFORCE still support VBE3.0?
<Nick Triantos> Vesa 3.0?
<MicroDooD> I think thats what he means
<MicroDooD> if it will be in the BIOS or not
<Nick Triantos> Sorry, I don't know what VBE 3.0 is, but yes, we should still continue to support Vesa 3.0.
<Chris Donahue> Next?
<Nick Triantos> (Unless something newer and better comes along)
<[Jar]> Vexxed> will it support vertex blending for multiple weighted matrices used in skeletal animations?
<Nick Triantos> Yes, we support it in hardware.
<Nick Triantos> Next?
<[T3D]Dan> Chrizz> Regarding BeOS support, will NVIDIA eventually write their own drivers for it, or will they keep making Be write their own?
<Nick Triantos> Not sure, probably depends on how much BeOS takes off.
<Chris Donahue> Next?
<greg[RE]> <|ssss|> ask him if he has tested with powerpc g4
<greg[RE]> i'm wondering that too.
<Nick Triantos> Not yet, I'm a PC guy now.
<Nick Triantos> Sorry, I really haven't even touched a Mac since 95. Next?
<Razlak> Redwood asks: when will we be able to get benchmarks on real games? or review samples...
<Nick Triantos> Chris will answer.
<Chris Donahue> Soon - there will be reviews released very soon :)
<[Jar]> King_Dari> does NVIDIA have any plans to segment the GeForce processors like the TNT2 (vanilla, Ultra)?
<Chris Donahue> Next?
<Chris Donahue> We plan on leveraging our technology as widely as possible - this allows us to maximize our R&D investment
<Nick Triantos> Next?
<[T3D]Dan> I've had a few questions asking what you guys think of T-Buffer. Do you consider it over-hyped? How does T&L compare?
<Nick Triantos> Hang on, Chris is typing. :-)
<Chris Donahue> There are an infinite number of problems to resolve in 3D hardware - developers have told us that they want more complex characters and environments and that special effects are not that important
<Nick Triantos> Next?
<MrBread> <rhink> will the GeForce support true Trilinear filtering, not the "fake" approximated trilinear of the TNTx series?
<Chris Donahue> Therefore we chose to implement a feature that people can make use of and that shows a dramatic difference in their apps
<Nick Triantos> Yes. And TNTx also supports true trilinear.
<Nick Triantos> There are times when we use dithered trilinear for better performance in multitextured games, though.
<MicroDooD> true trilinear as opposed to 'fake' trilinear?
<Nick Triantos> On GeForce, trilinear is never dithered.
<MicroDooD> oh, ok
<MicroDooD> <_wraith> will there be any chance of paralleling multiple GPUs on the same board?
<Nick Triantos> TNT also supported a texture filter that looks like true trilinear though.
<Chris Donahue> No current plans... but that is up to the OEM :)
<Chris Donahue> NExt?
<[Jar]> <Vastator-> I was wondering, how can blending work (requirement of presorted polys) without any way to do this with OpenGL (at least 1.1)?
<Nick Triantos> Vastator, sorry, I don't understand your question. If an app wants correct blending, they must render objects back to front.
<greg[RE]> ready?
<Chris Donahue> Sure...
<Nick Triantos> OpenGL requires drivers to render objects in the order an app asks for them.
<greg[RE]> <Fulg0re-> Will there be a performance hit if separate per-pixel perspective-corrected diffuse and specular color are enabled?
<Chris Donahue> No, that has been fixed... (which company asked that?)
<Razlak> <jemhadar1> is it possible that with the GeForce having SMP-threaded drivers, playing, let's say, a game of Unreal in Win2000 could take advantage of dual CPUs even though the game is not optimized for it?
<Nick Triantos> It's something we're working on in the drivers, but really, the best performance will be had only from apps that are architected to use the multiple CPUs... (more coming)
<Nick Triantos> Unreal's a good example. The game was originally not written for hardware acceleration, so it always fell a bit behind things like Quake. Tim Sweeney has really improved it lately, though, by rewriting more of the "core" engine.
<Nick Triantos> Next?
<[Jar]> Siliputti> are you active in getting developers to support the Dot product bump mapping. Why did you choose that over environmental bump mapping?
<Chris Donahue> Absolutely - but we would prefer them to use actual geometry - bump mapping is just a poor substitute for real polys :)
<Nick Triantos> Dot product bump mapping kicks booty. Wait til you see it, but Chris is right, real geometry will always look better.
<Nick Triantos> Next?
<greg[RE]> <[dksuiko]> ask them if there will be a performance variance when using the GeFORCE on a Celeron 300A to a Pentium3 - 600 running a program that has only graphics (given that it completely supports GPU) and has no AI, physics, etc. for the CPU to calculate.
<Nick Triantos> Again, depends on (1) the app, and (2) the motherboard. Assuming all other things are equal, the app can perform equally well on both CPUs.
<Nick Triantos> Next?
<MicroDooD> <Toretzu> Regarding the cube env. mapping: are these generated in hardware or must they be provided by the application?
<Nick Triantos> The cube maps themselves can either be app-supplied or hw-rendered.
<Nick Triantos> Next?
<Razlak> a nice light one
<Razlak> <Postal_> How do you pronounce "GeForce?" Gee force? Gehforce? :)
<Nick Triantos> "G" Force!
<Nick Triantos> Next? :-)
<[Jar]> <MfA> Will the non windows drivers be open source? (ie not run through the pre-processor)
<Chris Donahue> btw - developers are hyped about T&L - but they are just as excited about cube maps - there are a ton of cool tricks they can do with them.
<Nick Triantos> What would you want with open source drivers, by the way?
<Nick Triantos> I'm not sure what our plans will be regarding that.
<Nick Triantos> Next?
<MicroDooD> Chris Donahue: I have a friend who wants to write some OS/2 drivers for TNT
<Nick Triantos> Tell him to send a resume to me.
<Nick Triantos> Next?
<[T3D]Dan> We knew about this one! ... =) <phorensic> are the GeForce256s being sold on e-bay and other online auctions legit, or are people faking selling the card?
<Chris Donahue> Nope, we are selling some boards on Ebay - the proceeds will go to charity
<Nick Triantos> Chris is answering...
<Nick Triantos> Next?
<MicroDooD> <L|ghtning> Q: all those transistors / clockspeed must generate a rather large amount of heat... what cooling specs are you recommending to your OEM's - active cooling like asus did with their TNT boards, or large passive heatsinks like the voodoo3 had - or possibly something different still?
<Nick Triantos> NP. It's up to the OEMs. Some will likely choose higher clocks and a fan, while others may choose lower clocks and just a heatsink.
<Chris Donahue> We will be seeing them below the Arctic Circle :)
<Razlak> <Computing> Will the drivers for GeForce be a unified driver that works on all TNT/TNT2/GeForce cards?
<Nick Triantos> Yes, one driver, all cards. Kick ass.
<Razlak> good job
<Nick Triantos> Next?
<Nick Triantos> Thx.
<MrBread> hehe
<Nick Triantos> Virge is here? Didn't Virge die a long time ago? :-)
<Nick Triantos> OK, next?
<[Jar]> <Maxim> will geFORCE256 support hardware alpha blending in directdraw (not direct3d)
<Nick Triantos> I'm not sure about that, sorry.
<Nick Triantos> Next?
<MrBread> <|warlock|> damn fine chip on paper, but it should consume a lot of power -- any potential problems with motherboards?
<Nick Triantos> There's some motherboards that don't supply anywhere near the power they are required to. Those motherboards will have problems with many current and future cards from many vendors.
<Nick Triantos> Next?
<MicroDooD> <Maxim> ask if geforce256 will support hardware alpha blending in directdraw (not direct3d)
<Nick Triantos> Maxim, sorry, don't know.
<Razlak> what does GeForce 256 mean anyway?
<Chris Donahue> Ge=Geometry Force=kick ass
<Nick Triantos> next?
<MicroDooD> <forums> Can you ask them how they are dealing with bandwidth constraints? Glaze3D is using embedded RAM; will conventional RAM be enough for GeForce?
<IBMosher> <RickR> Do you have any details on if and when you will support the xFree DRI?
<Nick Triantos> Lemme say this... I've seen some amazing performance out of GeForce, and there are some other products out there, no names mentioned, that sound great on paper, but aren't real until they're real. We ship products that you can benchmark.
<Nick Triantos> RickR, sorry, we haven't announced anything about that.
<Chris Donahue> Next?
<MicroDooD> <Toretzu> What's the speed impact of using hw-rendered cube env. maps?
<Nick Triantos> None, in most apps. Very small probably, in the worst case.
<Nick Triantos> Next?
<MrBread> <mikewarri> Can you ask if the GeForce is Dual or Quad Texturing?
<Nick Triantos> Quad texturing.
<Nick Triantos> Sorry...
<Chris Donahue> Check for the full spec :)
<Nick Triantos> It has a four-pixel pipeline, and can render 2 textures.
<Nick Triantos> Next?
<IBMosher> <Locke2> does the GeForce256 support ANY t-buffer-esque features?
<Nick Triantos> There's a cool app called T-Bluffer. You can download it from Runs great on TNT and beyond, shows motion blur, full-scene AA, and depth of cue.
<Chris Donahue> Next?
<Nick Triantos> Sorry, depth cue.
<Nick Triantos> Next?
<Razlak> <Orenthal> What is the number of instructions issued per clock by nv10's gpu relative to the number of texels per pipeline.
<Nick Triantos> Orenthal HUH?
<Chris Donahue> Many.
<Chris Donahue> Next?
<[T3D]Dan> <Greenbean> Mentioned in the articles are 8 "free" specular lights; what happens if, say, 16 are present in a scene? Does it become mind-numbingly slow?
<Razlak> heh
<Nick Triantos> Through both D3D and OpenGL, we announce that we only support 8 lights. An app cannot request more.
<Nick Triantos> BTW, 8 is a standard used even on the highest end 3D workstations that cost more than $100,000.
<Nick Triantos> Next?
<Chris Donahue> Developers are going to have to balance the hardware and software lighting needs of their apps to get maximum performance.
<greg[RE]> OK everyone, now we have Patrick from WXP on. They're making the game "Experience", which utterly blew my mind. His nick is Pyronious.
<Chris Donahue> Hi Patrick!
<greg[RE]> So questions about WXP are cool too.
<Nick Triantos> Hey Patrick!
<Razlak> <Vortexo> how do they answer to the public who buy nvda's stuff the moment they come out only to find a newer/faster thing to be released pretty shortly?
<Nick Triantos> Yeah, for all of you who haven't checked it out, you MUST go see their web site: Totally damn cool!
<Pyronious> Hello!
<MicroDooD> hello Pyronious
<Pyronious> We'll be prioritizing our lights based on distance from the affected polygon to get "more than 8 lights" on the GeForce256.
<Chris Donahue> This is a problem with all technology - same with CPU's, TV's stereos, 8-tracks... :)
<MicroDooD> ready for another?
<Chris Donahue> Gotta buy sometime - if you wait you miss the fun.
<Razlak> I don't think we want them to slow down... :P
<Nick Triantos> Sure.
<MicroDooD> <Lepper> How much of a performance penalty will there be to implement full scene anti-aliasing? Is it similar to the penalty in current Nvidia solutions?
<Nick Triantos> Full-scene AA will always cost performance on everyone's hardware, regardless of what their marketing folks tell you. It requires a lot of reading and writing to memory.
<Nick Triantos> Check out T-bluffer.exe, though, you can see that it kicks butt if done right. T-bluffer is a small OpenGL app that does many cool tricks.
<Nick Triantos> Next?
<MicroDooD> <Lepper> What is the fill rate of the GeForce for games that only support single texturing or dual texturing, respectively?
<Chris Donahue> we have stated a rate of 480 megapixels...
<Nick Triantos> Next?
<IBMosher> <j0e-> are there any nv10 based cards coming out with video capturing/editing capabilities? like an all-in-one AGP solution with good capturing and TV viewing capabilities... cause currently there aren't many good competitors on the market
<Chris Donahue> In light of past OEM implementations - I would guess you will see vid-cap and other cool thingies
<Nick Triantos> Next?
<Razlak> <malone> what are your thoughts on the consolidation of board and chip makers, does nvidia have any such plans for the future?
<Nick Triantos> Hang on, Chris typing...
<Chris Donahue> No, our business model is working VERY well - no need to change things. We do like the clearing of the field, it makes domination easier :)
<Pyronious> Just to give you a "real-world" scenario, our GeForce256 demo pushes over 80,000 polygons per frame, with all the bells and whistles turned on.
<Chris Donahue> Next?
<Chris Donahue> Yea baby :)
<greg[RE]> <[dksuiko]> can they FORCE full-scene anti-aliasing in current games?
<Chris Donahue> we got your polys right here :)
<greg[RE]> (with raw, unadulterated power :)
<Nick Triantos> We removed support for full-scene AA from our most current drivers. It causes too many bugs in some titles that wanted to use it.
<Chris Donahue> Next?
<Razlak> <DFo3D> ok : The A3D 2.0 engine relies on geometry data moving between CPU and video board in order to perform wave tracing, does nVidia have a solution with Aureal to add full functionality to A3D 2.0 or will wavetracing simply be gone with a GPU instead of a 3D accelerator?
<Nick Triantos> It may still be there in OEM drivers though.
<Nick Triantos> Huh? A3D does not use any 3D graphics facilities, as far as I know... They might store objects in the scene, but they have their own 3D engine.
<Nick Triantos> To be honest, I don't know, though.
<Nick Triantos> Next?
<Razlak> thx anyway
<MicroDooD> <arielb> question: can the x86 CPU also be used for T&L, or does the GeForce take this job completely?
<Razlak> and will that also be load balanced?
<Chris Donahue> I've gotta go - leaving for the UK tomorrow... gotta do laundry and stuff... thanks everyone!
<Nick Triantos> The X86 *can* still be used for T&L, but it doesn't seem to be able to keep up with GeForce.
<Pyronious> Later Chris
<Nick Triantos> Next?
<SoupBone> later
<greg[RE]> Bye Chris! Thanks
<Razlak> <_R0M> do they support dot3 in DX6 or DX7 and also in GL?
<MicroDooD> see ya Chris
<Chris Donahue> Later people! Demand your GeForce256!
<MicroDooD> thanks for coming!
<Nick Triantos> _R0M, you jealous? :-) Yes, by the way.
<Razlak> <Postal_> Will nVidia release videos of their tech demos? Pretty please?
<[Jar]> heh
<IBMosher> o man
<MicroDooD> I'm sure everyone is very glad ya came :)
<[T3D]Dan> haha
<Nick Triantos> Postal, we'll most likely post something, according to Chris.
<Nick Triantos> OK guys, any final couple of questions?
<Pyronious> And be sure to buy your GeForce from an OEM that packs in our demo!
<IBMosher> <HamsterHu> what will be the bit depth of the Z-buffer? and how many fog modes will it support?
<Nick Triantos> Hamster, we support 16-bpp and 24-bpp Z/W, fixed and float. And we support all D3D/OpenGL fog modes.
<Nick Triantos> Next?
<[T3D]Dan> <Bart> Does the card produce a lot of heat? and what kind of cooling system will be used? Any thoughts on the new system the Althon is using?
<Nick Triantos> Bart, sorry, as Chris has said, it's up to the OEMs to decide how to manage heat on their cards. Some might use heatsinks, some might use heatsink + fan.
<Nick Triantos> Don't know what Athlon is doing, but mine has a honkin cool heatsink on it. :-)
<Nick Triantos> Next?
<MicroDooD> <Lepper> Does the GeForce support environment bump mapping (the kind used in Expendable, etc..)?
<Nick Triantos> Lepper, GeForce does Not support the DX6 bumpmapping.
<Nick Triantos> Next?
<MicroDooD> <Gemfightr> to Nick Triantos I'd like to know if the Geforce will work with shutter glasses ?
<Nick Triantos> That's something else that's up to each OEM as they design the board. In other words, sorry, but I don't know.
<Nick Triantos> Next?
<greg[RE]> From Tommy McClain of D3D fame: Do the GeForce drivers support load balancing between the GPU and the CPU, like 3Dlabs' PowerThreads and S3's drivers for their Savage2000?
<Nick Triantos> Tommy, we are working on it. Some of it is there, but there's still more room for improvement.
<Nick Triantos> BTW, gotten several private tells asking about whether current games can run on GeForce. Yes, they can. Many OpenGL ones will use the xform/lighting engine, and all will use the fast rasterization.
<Nick Triantos> Next?
<MicroDooD> <NeoTomba> I'd like to know if the GeForce performs better on the Athlon or the P3.
<Nick Triantos> That also depends on the app, but in general, it seems that Athlon is faster at the benchmarks I've run.
<Nick Triantos> Next?
<Nick Triantos> Let's take 5 more questions.
<IBMosher> <Adversary> it was mentioned the card will be able to do weighted matrix blends, is this true, and if so, how many matrices? if not, can it do tweening?
<MicroDooD> ok, 5 more folks
<Nick Triantos> Adversary, yes, GeForce supports transforming all vertices by 2 matrices, and interpolating between them. That will allow cool developers to implement hardware skinning.
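[Editor's note] The two-matrix blending Nick describes is the basis of hardware skinning: transform each vertex by both matrices and interpolate the results with a per-vertex weight. A minimal sketch, with illustrative names (this is not NVIDIA's API) and row-major 3x4 matrices (rotation plus translation):

```c
/* Sketch of two-matrix vertex blending ("skinning"): a vertex near a
 * joint is transformed by both bone matrices, then the two positions
 * are interpolated by a per-vertex weight. Illustrative code only. */
typedef struct { float x, y, z; } Vec3;

/* Apply a row-major 3x4 matrix (12 floats) to a point. */
static Vec3 transform(const float m[12], Vec3 v)
{
    Vec3 r;
    r.x = m[0]*v.x + m[1]*v.y + m[2] *v.z + m[3];
    r.y = m[4]*v.x + m[5]*v.y + m[6] *v.z + m[7];
    r.z = m[8]*v.x + m[9]*v.y + m[10]*v.z + m[11];
    return r;
}

/* weight = 1 picks matrix a entirely, weight = 0 picks matrix b. */
Vec3 blend_vertex(const float a[12], const float b[12], Vec3 v, float weight)
{
    Vec3 pa = transform(a, v);
    Vec3 pb = transform(b, v);
    Vec3 r;
    r.x = weight * pa.x + (1.0f - weight) * pb.x;
    r.y = weight * pa.y + (1.0f - weight) * pb.y;
    r.z = weight * pa.z + (1.0f - weight) * pb.z;
    return r;
}
```

Doing this on the transform engine instead of the CPU is what lets skinned characters bend smoothly at joints without the CPU recomputing every vertex each frame.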
<Nick Triantos> Next?
<Razlak> <Chewie> have they made any dealings with Adobe or other CAD program designers about using the GPU instead of the CPU's FPU to render things in less time?
<Nick Triantos> Chewie, many of the image processing filters that Adobe, etc use are very oriented toward CPUs, but we have a very good working relationship.
<Nick Triantos> Next?
<greg[RE]> from R0M: can you name some examples of games (not just developers) supporting Cube Environment Mapping?
<Nick Triantos> I can't talk about games that haven't been announced yet, sorry.
<Nick Triantos> Next?
<Pyronious> We are supporting the cubic environment mapping in Experience, and you will be able to see it in the Dagoth Moor Zoological Gardens demo for GeForce.
<Nick Triantos> R0M, you seem really jealous. If you like us so much, send me a resume.
<Nick Triantos> There's one game that's announced support. :-)
<Nick Triantos> Next?
<greg[RE]> i'll give him your email nick. haha
<MicroDooD> <[T3D]xero> will the athlon's superior bus bandwidth benefit the geforce in a major way?
<Kenneth Hurley> BTW pyronious, great looking new build.
<MicroDooD> thats all folks
<Pyronious> Thanks, Ken. It's really looking killer on your board.
<Nick Triantos> Again, it really depends on the app. In many apps, yes, an Athlon's bus will benefit performance.
<MicroDooD> thank you so very much Ken & Nick
<Razlak> a big thanks to Nick, Ken, Chris and anyone else who showed up
<greg[RE]> The logs should be up on both our sites.
<[T3D]Dan> Thanks to everyone that came out, especially NVIDIA. The chat log will be posted later.
<Nick Triantos> Hey, final plug I have to make. Check out their AMAZING demo. It'll explain a lot about GeForce when you see their screenshots.
<greg[RE]> do it
<greg[RE]> it's awesome
<SoupBone> heh
<Nick Triantos> Next (one more question)
<[T3D]Dan> Yes WXP kicks ass
<SoupBone> nice shots
<Pyronious> Thanks!
<Pyronious> Make sure you buy your GeForce from an OEM that packs in our DEMO, it ROCKS!
<Nick Triantos> OK, last question?
<Nick Triantos> (Last for me, at least)
<MicroDooD> thanks guys! I'm sure I speak for everyone in saying that its been great having you here to answer our questions
<rec> (mws) what is the threshold where the transactions become more card bound?
<MicroDooD> <JR> tell them to upgrade their damn web server! =)
<[T3D]Dan> haha
<greg[RE]> go home and play everquest, nick. :)
<Bullhonky> Heya Nick, I've been here, just got a discon :P
<Bullhonky> Hmm Nick plays EQ? :)
<SoupBone> EQ = netcrack :P
<Pyronious> I know our server is hosed right now due to all the traffic from the GeForce announcement.
<Nick Triantos> Someone asked about when a card becomes more bus bound. It's impossible to say, really depends too much on the app.
<[T3D]Dan> So is my server. argh
<Nick Triantos> And yes, I'm off to pay some bills, kiss the wife hello, and play some EQ.
<MicroDooD> kew kew
<Nick Triantos> OK all, thanks much, it's off to killing frogloks for me now.
<bizychild> later Nick!!
<greg[RE]> thanks nick
<BullhonkE> Hey Nick what server do you play on? :)
<bizychild> Bull is an EQ god :)
<Nick Triantos> I play on Cazic-Thule, but I'm not saying any more than that. And no, I'm not even close to EQ god. Only lvl 23. :-)
<BullhonkE> Pfft, heheh
<Nick Triantos> OK, g'nite all.
