Please don't do 3D like VMware!

Discussion in 'Parallels Desktop for Mac' started by hoju, Mar 2, 2007.

  1. hoju

    hoju Member

    Messages:
    27
    Perhaps you are right; that point keeps being made over and over. I can't speak to the low-level effort involved, but I can't see how emulating each and every API on each and every OS (and each version of each API) is any less effort.

    I still fail to see how the emulated approach is going to go out of date any faster than the roll-your-own card-and-driver approach, which is still going to have to keep pace with this mythical need to match graphics hardware advances. Or not. The latter case is more likely, and applicable to either scenario.

    I have been using VMs now for going on 5 years at least, and the SVGA drivers have hardly kept pace with anything... they are the same spec we were using in 1995 pretty much. I think adding 3D to that layer does not mean the intention is to play keep-up-with-the-GPU-joneses.
     
  2. Mathew Burrack

    Mathew Burrack Junior Member

    Messages:
    17
    If we were talking about DirectX, then yes you'd have the same catch-up problem. However, OpenGL (with the exception of 2.0) basically handles all upgrades through a single extension mechanism. Ideally, if you could trap the extension mechanism itself (just another WGL API call) and reroute it dynamically to the extensions supported on the host OS, then the OGL virtualization in Parallels would basically automagically upgrade to inherit whatever capabilities the host OS has, as it gets upgraded.
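    Something like this sketch is what I have in mind; host_supports_extension() and host_lookup_proc() are made-up names standing in for whatever the guest-host bridge actually exposes:

    [code]
    /* Sketch only: a guest-side replacement for wglGetProcAddress that
       reroutes each extension query to the host's OpenGL.  The two host_*
       functions are hypothetical stand-ins for the Parallels bridge. */
    #include <windows.h>

    extern int  host_supports_extension(const char *name);  /* hypothetical */
    extern PROC host_lookup_proc(const char *name);         /* hypothetical */

    /* Exported by the virtual GL driver in the guest instead of the real one. */
    PROC WINAPI virt_wglGetProcAddress(LPCSTR name)
    {
        /* Only hand back an entry point if the host can actually service it;
           otherwise the guest app simply sees the extension as unsupported. */
        if (!host_supports_extension(name))
            return NULL;
        return host_lookup_proc(name);
    }
    [/code]

    As the host's OpenGL picks up new extensions, the same trap starts advertising them to the guest with no driver rewrite.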

    As for the multitude of OSes to support: there are basically only three interfaces: WGL (for all the Windows variants), GLX for the Linux variants, and AGL for OS X, and the last one obviously isn't needed. Yeah, someone might have to generate unique drivers for each Linux flavor using the same GLX code base, but given how such code already exists out in the Linux world, that shouldn't be *that* hard of a chore (I'd think, at least).

    So consider: implement one API (or two, really, if you want Linux support) as basically a pass-through mechanism for two dozen or so API calls, which by nature will upgrade capability as the host upgrades; or implement an entire chipset emulation (NOT virtualization), which will immediately lock it to a single set of capabilities, be considerably slower, and have at least a geometric increase in entry points. Not to mention that many of those entry points will have to be trapped interrupt and register calls, not just simple API traps, and thus be inherently slower than the first method, on top of all the other disadvantages.
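    To make the pass-through half concrete, here's roughly what a guest-side stub would have to do; send_to_host() is a made-up name for whatever channel actually carries the data:

    [code]
    /* Sketch of the pass-through idea: the guest-side GL driver packs each
       call into a small command record and ships it across the guest-host
       channel; the host replays it against its own OpenGL. */
    #include <stddef.h>
    #include <stdint.h>

    enum { CMD_GL_DRAW_ARRAYS = 1 };

    struct gl_cmd {
        uint32_t opcode;    /* which GL entry point */
        uint32_t args[4];   /* raw arguments; enough for the simple calls */
    };

    extern void send_to_host(const void *buf, size_t len);  /* hypothetical bridge */

    /* Guest-side stub for glDrawArrays(mode, first, count). */
    void virt_glDrawArrays(uint32_t mode, int32_t first, int32_t count)
    {
        struct gl_cmd c = { CMD_GL_DRAW_ARRAYS,
                            { mode, (uint32_t)first, (uint32_t)count, 0 } };
        send_to_host(&c, sizeof c);
    }
    [/code]

    Compare that with emulating a chipset, where the equivalent "entry point" is a stream of trapped register writes.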

    Look, just somebody tell me how the guest-host OS bridge works in Parallels, and I'll go write a Glide driver and demonstrate what I'm talking about lol. Well ok, it's not *that* simple, but I *would* be willing to work on such a concept...

    --mcn
     
  3. drscience

    drscience Member

    Messages:
    30
    Technical speculations and brainstorming aside, I keep in mind that Hoju, in his initial fanboy post, was being deliberately provocative. However, it should be said that:

    1) Whether Parallels can come up with a better implementation of 3D than VMWare is far from clear. They have yet to come up with any implementation at all, and seem flummoxed by delivering an effective USB2 or multi-core implementation, things which Fusion has had since its initial beta. Cut and paste is still broken, and Parallels cannot figure out how to implement global drag and drop without exposing the host's file system to Windows. None of this speaks to me of deep technical competence.

    2) It isn't a "lame excuse" to point out that Fusion's relatively slow performance is attributable in large part to debugging code. It's a simple fact. From my perspective as a beta tester, the presence of this code is a testament to VMWare's far more rigorous and systematic approach to debugging. Perhaps this is why, even in its beta state, Fusion is more stable than Parallels, and no one has reported a single trashed host OS installation, tools installation failure, or repeated BSODs, etc., even though it isn't even a shipping product.

    3) This emphasis on quality and reliability on VMWare's part, together with their own statements about their future development efforts in this area, suggests that their ultimate 3D implementation, whatever form it takes, will likely be far less troublesome than that of Parallels.

    4) They have also stated that they are actively hiring experienced Mac developers, so if Hoju or anyone else cares to walk their talk, they have an opportunity to do so.

    I am not an employee of VMWare, do not speak for them, have not purchased any of their products, and use Parallels in lieu of Fusion every day, despite its peculiarities and utter lack of support, because Fusion is simply too slow in its current incarnation. But I have been treated far more respectfully and honorably as a potential customer and beta tester of VMWare than I have as an actual customer of Parallels.
     
  4. hoju

    hoju Member

    Messages:
    27
    Well said drscience!

    In the interest of full disclosure, I have to say I was a long-time fan of VMware before the age of Intel Macs and Parallels. They were the best, and for a long time ONLY, VM solution for Linux.

    But I am behind these Parallels folks; I think they are doing a good job for a little shop. Their clean 2-file VM structure is so much nicer than the 30-file VMware layout.

    I didn't find Fusion to be as rewarding as drscience did. Beyond being slow, I found a lot of little annoyances (like not having Fedora listed, and having to go through 2 failed installations and scour the forum to find out it needs the 2.6 compat setting just to install). And there was no easy (keyword) way to import my VMs from Parallels (while Parallels sucked up all my other VM formats par excellence).

    But VMware's choice to only deal with DirectX was the kicker. Gaming in XP in a VM on Mac would be great, sure, because Mac games really suck; the ports are awful. You can't even Apple-Tab out of them to arrange a LAN hookup via chat, yeesh. They are a lousy value even if you torrent them down!

    But for those of us that want to run a full-blown Linux desktop in a VM with all the OpenGL goodness, VMware 3D was just a big tease... and a letdown. I mean, why don't they call it DirectX emulation, or Direct3D emulation... to call it 3D virtualization is really false advertising.

    It seems to be all geared toward Aero, but who really cares about that abortion Vista? Especially if you are on a Mac. That seems to be what all the 3D hubbub is about. Just the thought of installing it gives me a "no" feeling. What a waste of 3D energy :)

    I am skeptical that debug code can really account for the dramatic sluggishness, but we shall see. I look forward to the release bakeoff between the two.
     
    Last edited: Mar 15, 2007
  5. tomservo291

    tomservo291 Member

    Messages:
    90
    Age-old excuse? Debug logging on a large-scale project is a HUGE drag on performance. If I turn on debug logging for even half of a medium-sized application I am currently involved with at work, it slows to less than 50% performance, or worse. Logging suites generally have transactionality requirements to ensure proper logging (files must be properly written and verified... basically the equivalent of closing and reopening a file descriptor every time you write to the log).
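    If you want to see why, here's a toy illustration; the exact numbers will differ by machine, but the shape of the result won't:

    [code]
    /* Toy illustration of why "transactional" logging hurts: forcing every
       record out to disk is vastly slower than buffered writes.  Time this
       with flush_every_time set to 0, then to 1. */
    #include <stdio.h>
    #include <unistd.h>   /* fsync, fileno */

    static void log_line(FILE *f, const char *msg, int flush_every_time)
    {
        fprintf(f, "%s\n", msg);
        if (flush_every_time) {
            fflush(f);            /* push the stdio buffer to the kernel */
            fsync(fileno(f));     /* and force it all the way to disk */
        }
    }

    int main(void)
    {
        FILE *f = fopen("debug.log", "a");
        if (!f)
            return 1;
        for (int i = 0; i < 10000; i++)
            log_line(f, "some debug trace line", 1);
        fclose(f);
        return 0;
    }
    [/code]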

    That is no excuse. If debug logging is on, you can expect performance to blow.
     
  6. argh

    argh Junior Member

    Messages:
    14
    Seeing as I'm unlikely to be playing two games at once.... I'd be quite happy for Parallels to effectively steal the videocard from MacOSX :)

    Why not?

    It already does that for the CD drive. I don't know if you noticed, but when running Parallels, the CD disappears from the desktop.

    Why not let Parallels steal the videocard too?

    And then of course give it back when Parallels quits.

    It would probably be one almighty hack, though. It would be easier if they petitioned Apple to support "GPU borrowing". I think it would be a good new feature for MacOSX :)
     
  7. rhind

    rhind Member

    Messages:
    84
    Why not just use BootCamp then?
     
  8. wesley

    wesley Pro

    Messages:
    396
    Probably because you have to shut down OS X altogether and reboot just to load a game. It's a big hassle - I'd want to have stuff running in the background while gaming so I can resume whatever I was doing later right after finishing the game. Dual booting is a necessary evil, and if concurrent running is possible, it'll almost always be preferred over it.
     
  9. rhind

    rhind Member

    Messages:
    84
    Yes but the graphics card is probably fairly fundamental to the OS, unlike a CD, and so 'stealing' it from OS X, even temporarily, would probably end up being similar to a reboot anyway.

    Russell
     
  10. mike3k

    mike3k Member

    Messages:
    65
    I want to be able to run Beryl in Ubuntu for visual effects. I don't play any Windows games, so I couldn't care less about acceleration in Windows. I only use Windows to run Visual Studio and Outlook.
     
  11. Mathew Burrack

    Mathew Burrack Junior Member

    Messages:
    17
    Actually, it'd be pretty much impossible to "steal" the gfx card from OS X. One of the primary architectural decisions of OS X, as compared to OSes like Windows (minus Vista, which has started to go down the route of OS X), was to separate the application graphics layer from the graphics card itself, so that the only way to draw graphics is to effectively go through the OS and "play nice" with other apps. The downside is slightly worse performance for single apps like games, but you get multi-app effects like Exposé or Quartz Extreme essentially for free, or at least much more easily. It also means that a single misbehaving app can't render the entire graphics subsystem unusable.*

    Now, this doesn't mean Parallels couldn't support fullscreen games; it could, just as much as OS X allows fullscreen games. It just means that it'd have to do so the "official" way, through OS X, and that giving full control of the video card to the guest OS is basically straight out of the question.
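    For the curious, the "official" fullscreen route looks roughly like this; error handling is omitted, and obviously Parallels would be driving a GL context in the middle:

    [code]
    /* Rough sketch of OS X's sanctioned fullscreen path: capture the display
       through Quartz, draw, then hand it back to the window server. */
    #include <ApplicationServices/ApplicationServices.h>

    int main(void)
    {
        /* Take exclusive use of the main display away from other apps. */
        if (CGDisplayCapture(kCGDirectMainDisplay) != kCGErrorSuccess)
            return 1;

        /* ...a fullscreen window or GL context for the guest would live here... */

        /* Give the display back when the guest leaves fullscreen. */
        CGDisplayRelease(kCGDirectMainDisplay);
        return 0;
    }
    [/code]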

    * For the technophiles out there: yes, this is a *gross* oversimplification. But, IMHO, a fair and accurate one, and at least points out the difference in philosophies and why the above suggestion wouldn't work.

    --mcn
     
  12. limec

    limec Member

    Messages:
    29
    There's a reason for VMware to have so many files. VMware's VM files are cross-platform, meaning they can be used on ext2, FAT, etc. FAT has a 2GB-per-file limit, thus the many files.
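    The split-into-extents trick is simple enough; here's a sketch of the idea (not VMware's actual code, just an illustration):

    [code]
    /* Illustration only: how a big virtual disk can be stored as 2 GB extent
       files so the whole VM still fits on a FAT volume. */
    #include <stdio.h>
    #include <stdint.h>

    #define EXTENT_SIZE (2ULL * 1024 * 1024 * 1024)   /* 2 GB per extent file */

    /* Map a guest-disk offset to (extent file index, offset within that file). */
    static void map_offset(uint64_t disk_off, unsigned *file_idx, uint64_t *file_off)
    {
        *file_idx = (unsigned)(disk_off / EXTENT_SIZE);
        *file_off = disk_off % EXTENT_SIZE;
    }

    int main(void)
    {
        unsigned idx;
        uint64_t off;
        map_offset(5ULL * 1024 * 1024 * 1024, &idx, &off);   /* 5 GB into the disk */
        printf("extent file #%u, offset %llu\n", idx, (unsigned long long)off);
        return 0;
    }
    [/code]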
     
  13. hoju

    hoju Member

    Messages:
    27
    Nope, that is merely an option, not the reason for the file count. And Parallels is also cross-platform... FAT sucks.
     
  14. limec

    limec Member

    Messages:
    29
    You don't get it. If you copy your gigantic VM to Windows 98 running on FAT and try to run it, can you do it? With multiple 2GB-sized files, this is still possible. Everyone KNOWS FAT sucks.
     
  15. MeStinkBAD

    MeStinkBAD Bit poster

    Messages:
    1
    FAT32 has a 4GB file size limit. FAT16 and earlier couldn't use more than 2GB of disk space per volume.

    As for hardware 3D acceleration...

    VirtualPC v4 or something included a plugin that allowed the VM to use an installed 3dfx Voodoo 2, with the Glide 3D API. Glide was a very simple, low-level API that only ran fullscreen in 16-bit color. 3dfx's GPUs were far simpler than today's cards. It should also be noted that this was possible because pre-OSX and Win9x both allowed applications direct hardware access. The NT and Darwin kernels don't really allow this.

    In order for a guest OS to access hardware directly, it must have dedicated access, which means the host OS will have to release its hardware access first. Then the guest OS could use the hardware. The result would be much like Boot Camp... except slower and buggier, etc.

    Personally, I'd choose software emulation of 3D hardware, since it would be the most reliable and behave correctly. Running the 3D hardware emulation on a separate dedicated core would be the best approach. You could actually render stuff as fast as first-generation DX8 cards. Using software emulation with 3D hardware support (i.e. D3D -> OpenGL translation) would yield slightly better performance, and look better. Mainly look better. I don't expect fast 3D rendering for a guest OS. I expect 3D acceleration to work!
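    To give a feel for what "D3D -> OpenGL translation" means in practice, here's a sketch of one translated call. guest_clear() is a made-up entry point; the flag values are the ones from the DirectX SDK:

    [code]
    /* Sketch of a D3D -> OpenGL translation layer handling a single call:
       the guest issues IDirect3DDevice9::Clear(), the virtual driver forwards
       the arguments, and the host turns them into equivalent GL calls. */
    #include <OpenGL/gl.h>    /* host-side OpenGL on OS X */
    #include <stdint.h>

    #define D3DCLEAR_TARGET  0x00000001   /* values from the DirectX SDK */
    #define D3DCLEAR_ZBUFFER 0x00000002

    void guest_clear(uint32_t flags, uint32_t argb, float z)   /* hypothetical */
    {
        GLbitfield mask = 0;
        if (flags & D3DCLEAR_TARGET) {
            glClearColor(((argb >> 16) & 0xff) / 255.0f,    /* R */
                         ((argb >>  8) & 0xff) / 255.0f,    /* G */
                         ( argb        & 0xff) / 255.0f,    /* B */
                         ((argb >> 24) & 0xff) / 255.0f);   /* A */
            mask |= GL_COLOR_BUFFER_BIT;
        }
        if (flags & D3DCLEAR_ZBUFFER) {
            glClearDepth(z);
            mask |= GL_DEPTH_BUFFER_BIT;
        }
        glClear(mask);
    }
    [/code]

    Every other entry point needs the same kind of mapping, which is why it looks better than raw software rasterization but still isn't free.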
     
  16. chrisj303

    chrisj303 Member

    Messages:
    72
    3D? What's that? - I'm still waiting for bloody Parallels Tools for Linux. Coherence? Ha! - I'm still waiting for full screen :eek:


    But seriously, I think a lot of Parallels users gagging for 3D graphics support should maybe step back a little and look at the bigger picture.
    I mean, is it REALLY essential to be able to play Doom 3 while running OS X side-by-side, for example? Anybody running Parallels can also run Boot Camp, and I think in the majority of cases that's probably a better option. It doesn't take long to re-boot, and it will always bring better performance.
     
    Last edited: Apr 2, 2007
  17. Resuna

    Resuna Member

    Messages:
    54
    There are only two systems to implement this in: Windows and non-X11 UNIX (you don't need to do anything for X11, Apple already provides an accelerated X11), which comes down to Windows and SDL. And you have the source to SDL, so writing OpenGL stubs is easy... that only leaves Windows.
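    By "stubs" I mean something like the following; SDL_GL_GetProcAddress is real SDL 1.2 API, the rest is just the shape of the idea:

    [code]
    /* Sketch of an OpenGL stub resolved through SDL: set up a GL-capable
       video surface, look the real entry point up, and forward to it. */
    #include <SDL/SDL.h>

    typedef void (*glclear_fn)(unsigned int mask);
    static glclear_fn real_glClear;

    int init_gl_stubs(void)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return -1;
        /* SDL_OPENGL loads the GL library and creates a context for us. */
        if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL)
            return -1;
        real_glClear = (glclear_fn)SDL_GL_GetProcAddress("glClear");
        return real_glClear ? 0 : -1;
    }

    /* The stub the guest-facing layer would export. */
    void stub_glClear(unsigned int mask)
    {
        real_glClear(mask);
    }
    [/code]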

    And you'll need to implement an OpenGL API for Windows Vista anyway, because Vista doesn't use native OpenGL... all the video card manufacturers have to implement their own OpenGL. So there's no downside for OpenGL... and you shouldn't even care about DirectX, since it'll have to take the hit from being translated to OpenGL anyway... might as well concentrate on making the apps that CAN be fast, fast.
     
  18. Resuna

    Resuna Member

    Messages:
    54
    That would be even MORE work, since you'd have to implement a driver for each and every video card. You can't just say "give me the GPU" and be done with it... you have to re-initialise it to a state where the Windows drivers can use it, then restore the state to something OS X is happy with when you're done, and you'll need to write separate code to do this for each card and chipset. Plus, for the GMA950 you'll need to reverse-engineer the way the integrated graphics is handled by OS X and intercept the Windows driver's attempts to stomp all over physical memory...

    No. Not a good idea.
     
  19. Resuna

    Resuna Member

    Messages:
    54
    Who cares? Anyone running Windows 98 outside a point-of-sale terminal or other semi-embedded environment that doesn't get upgraded should be getting danger money anyway. Anyone installing new software on a Windows 98 box shouldn't be allowed to play with scissors.
     
