I've read that Parallels Desktop supports eGPUs now. Does anyone know if this is an option for machine learning development? I'd like to do TensorFlow development on my MacBook Pro 16" in a Linux virtual machine that uses an Nvidia card connected as an eGPU. Is this possible? Thanks!
Parallels Desktop virtual machines can't directly access an eGPU. The guest only sees Parallels' own virtual graphics adapter; there is no PCIe/GPU passthrough, so the Nvidia card (and CUDA) can't be used inside the VM. Note also that macOS itself only officially supports AMD eGPUs, since Apple hasn't shipped Nvidia drivers for recent macOS versions, so an Nvidia eGPU wouldn't even work on the host.
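If you want to confirm this for yourself from inside the Linux guest, a quick sketch like the one below (assuming TensorFlow 2.x is installed in the VM) will show what TensorFlow can actually see:

```python
import tensorflow as tf

# Lists the physical GPUs TensorFlow can detect. Inside a Parallels
# Linux VM this comes back empty, because the guest only exposes
# Parallels' virtual display adapter, not the eGPU.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {gpus}")

# With no GPU visible, TensorFlow silently falls back to the CPU,
# so training will run, just without any eGPU acceleration.
```

For GPU-accelerated TensorFlow you'd need either a Linux box with the card installed natively, a cloud GPU instance, or a hypervisor that supports real GPU passthrough.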