

Fixed my issue by installing Xcode 9.2 on macOS Mojave Version 10.14. I don't think Xcode 10 is ready to be all that usable right now (at least for me and for those who have been seeing lag in their Xcode Simulators). I also attempted to use Xcode 9.3.1 and still experienced issues: the Simulator (iOS 11 versions only) seemed to lag behind the same way it did under Xcode 10.0. One thing that did help for sure was deleting Xcode 10 altogether and installing Xcode 9.2 along with its respective command line tools. Luckily this is probably the last iOS project I ever have to do with Xcode.
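For reference, the toolchain switch itself is only a couple of commands; the /Applications/Xcode_9.2.app path below is an assumption about where the downgraded copy ends up, not something stated above.

# make Xcode 9.2 the active developer directory (the matching Command Line Tools are a separate download)
sudo xcode-select --switch /Applications/Xcode_9.2.app
# confirm the active toolchain, then wipe simulator state left over from Xcode 10
xcodebuild -version
xcrun simctl erase all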

With every new version of Xcode, the simulators have become increasingly slower, and by this point they are so slow that they are almost unusable. The Xcode 9 simulators were already really slow (running at about 10 fps on this iMac), but with Xcode 10 they have become even slower. In fact, it's worse now: with Xcode 9 they were slow, but at least the framerate was quite steady and consistent. With Xcode 10 the framerate fluctuates at random intervals between about 10 fps and something like 2 fps, making the simulator almost unusable. This is even worse than before because the simulator now hogs 100% of CPU time, making everything in the operating system lag while the simulator is running; I can't do anything else at the same time.
I remember the time, many, many years ago, when the iOS simulators were actually extremely light, fast and efficient. They could simulate in real time, at a full 60 fps, even on slower Macs.

I'm running CARLA 0.9.6-29 on Ubuntu 18. By restricting GPU visibility, one can force CARLA to run on a specific GPU; the attempts below show what works and what doesn't.

singularity pull docker://carlasim/carla:0.9.5

# create a writeable home directory with the binary, the way the CARLA entryscript expects
CARLA_WORKSPACE=`pwd`/workspace/home/carla
mkdir -p $CARLA_WORKSPACE
singularity exec -C -H $CARLA_WORKSPACE images/carla_0.9.5.sif /bin/bash -c 'cp -r /home/carla/* .'
singularity exec images/carla_0.9.5.sif /bin/bash -c 'apt list --installed | grep sdl'

# run the standard way: runs on GPU 0, as expected
singularity run --nv -C -H $CARLA_WORKSPACE images/carla_0.9.5.sif

# try to steer it to GPU 5 by injecting the environment variables into the container
SINGULARITYENV_SDL_VIDEODRIVER=offscreen SINGULARITYENV_SDL_HINT_CUDA_DEVICE=5 SINGULARITYENV_NVIDIA_VISIBLE_DEVICES=5 SINGULARITYENV_CUDA_VISIBLE_DEVICES=5 singularity run --nv -C -H $CARLA_WORKSPACE images/carla_0.9.5.sif
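To tell which GPU each of these attempts actually rendered on, a host-side poll of nvidia-smi is enough. This is a generic check rather than part of the original log; the card whose used memory jumps by roughly the 917MiB seen below is the one CARLA landed on.

# while CARLA is up in another terminal, watch per-GPU memory and utilization from the host
watch -n 1 nvidia-smi --query-gpu=index,name,memory.used,utilization.gpu --format=csv
# the plain nvidia-smi process table also lists CarlaUE4-Linux-Shipping next to the GPU it is using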

# on the offhand chance the variables must be set both on the host and in the container, and the entryscript is canceling them out
SINGULARITYENV_SDL_VIDEODRIVER=offscreen SDL_VIDEODRIVER=offscreen SINGULARITYENV_SDL_HINT_CUDA_DEVICE=5 SDL_HINT_CUDA_DEVICE=5 SINGULARITYENV_NVIDIA_VISIBLE_DEVICES=5 NVIDIA_VISIBLE_DEVICES=5 SINGULARITYENV_CUDA_VISIBLE_DEVICES=5 CUDA_VISIBLE_DEVICES=5 singularity exec --nv -C -H $CARLA_WORKSPACE images/carla_0.9.5.sif CarlaUE4/Binaries/Linux/CarlaUE4 CarlaUE4 -carla-server

# run without singularity, still on device 0 (not surprising, SDL isn't installed on the host)
SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=5 NVIDIA_VISIBLE_DEVICES=5 CUDA_VISIBLE_DEVICES=5 workspace/home/carla/CarlaUE4/Binaries/Linux/CarlaUE4 CarlaUE4 -carla-server

I asked over in the nvidia forums for a general solution to selecting which GPU an OpenGL process runs on. Running the packaged binary directly, SDL_HINT_CUDA_DEVICE does pick the GPU:

$ SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=0 ./CarlaUE4.sh
$ SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=1 ./CarlaUE4.sh
$ SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=2 ./CarlaUE4.sh
$ SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=3 ./CarlaUE4.sh

(nvidia-smi output trimmed: the CarlaUE4-Linux-Shipping process shows up with 917MiB on the GPU selected by the hint.)

Docker is slightly less intuitive. The environment variable SDL_HINT_CUDA_DEVICE=1 is ignored inside the Docker container for reasons I've yet to determine. Further, docker run --gpus 1 specifies that only one GPU is visible, but it defaults to GPU index 0. You need to add the 'device=' prefix, e.g. --gpus 'device=1', to force it to expose only GPU 1 to the container.
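Putting the Docker workaround together, here is a sketch of the run command with visibility restricted to GPU 1. The image tag, port mapping, and the /bin/bash CarlaUE4.sh launch line follow common carlasim/carla usage and are assumptions here rather than something stated above.

# expose only physical GPU 1; inside the container it is enumerated as device 0,
# so CARLA renders on it without needing SDL_HINT_CUDA_DEVICE at all
docker run --rm -it --gpus 'device=1' -p 2000-2002:2000-2002 carlasim/carla:0.9.6 /bin/bash CarlaUE4.sh

The 'device=' value also accepts a GPU UUID instead of an index, which avoids surprises when different tools enumerate the cards in different orders.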
