VSCode Setup for Allen Development
Working with the Allen LHCb trigger framework can be painful if your editor doesn't understand where anything is (e.g. VSCode in a remote development setup), especially with all the custom toolchains, CUDA, and standalone configs. I've been using VSCode for most of my work, and over time I figured out a setup that makes development smooth. It provides working CMake integration, no bogus IntelliSense errors, and full symbol navigation.
Here’s what I do to keep things clean and working.
Extensions I Use
I stick to just two core extensions:
- CMake Tools – handles all the heavy lifting (configure/build/run).
- C/C++ IntelliSense – gives proper autocompletion, go-to-definition, and good diagnostics (most of the time).
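Both can also be installed from a terminal, which is handy on a fresh remote machine; these should be the marketplace IDs for the two extensions:

# install the two core extensions via the VSCode CLI
code --install-extension ms-vscode.cmake-tools
code --install-extension ms-vscode.cpptools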
Some time ago I explored different extensions specifically for CUDA support, but I found that the combination of CMake Tools and C/C++ IntelliSense works best for me. To be honest, I don't remember why I settled on this setup and skipped the CUDA extensions, but I'm not going through that journey of exploring options again anytime soon.
But let's return to the topic. These two extensions are enough when paired with a proper CMake preset and compile_commands.json.
The Preset: Defining My Build Once
Instead of retyping long CMake commands, I use CMakePresets.json in the repo root. Here's what mine looks like for building Allen in standalone mode with CUDA (the default GPU mode option) and cuDNN:
{
  "version": 3,
  "configurePresets": [
    {
      "name": "gpu",
      "displayName": "Standalone GPU Build",
      "generator": "Unix Makefiles",
      "binaryDir": "${sourceDir}/buildgpu",
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Release",
        "STANDALONE": "ON",
        "TARGET_DEVICE": "CUDA",
        "CMAKE_EXPORT_COMPILE_COMMANDS": "ON",
        "CMAKE_TOOLCHAIN_FILE": "/cvmfs/lhcb.cern.ch/lib/lhcb/lcg-toolchains/LCG_106c/x86_64_v3-el9-gcc13+cuda12_4-opt+g.cmake",
        "CUDNN_INCLUDE_DIR": "/home/melashri/local/cuda/include",
        "CUDNN_LIBRARY": "/home/melashri/local/cuda/lib64/libcudnn.so"
      }
    }
  ]
}
This currently has some hardcoded paths for cuDNN because I haven't merged the PR for it yet, so it is not part of the LCG toolchain. But one can even replace it with a custom LCG toolchain if needed and include everything they need in it.
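As a sketch of that custom-toolchain option, a thin wrapper file could include the stock LCG toolchain and pin the cuDNN paths there instead of in the preset (the file name my-toolchain.cmake is hypothetical; the paths are the ones from my preset above):

# my-toolchain.cmake -- hypothetical wrapper around the stock LCG toolchain
# pull in the real LCG toolchain first
include("/cvmfs/lhcb.cern.ch/lib/lhcb/lcg-toolchains/LCG_106c/x86_64_v3-el9-gcc13+cuda12_4-opt+g.cmake")
# then pin the cuDNN locations so the presets stay machine-agnostic
set(CUDNN_INCLUDE_DIR "/home/melashri/local/cuda/include" CACHE PATH "cuDNN headers")
set(CUDNN_LIBRARY "/home/melashri/local/cuda/lib64/libcudnn.so" CACHE FILEPATH "cuDNN shared library")

Pointing CMAKE_TOOLCHAIN_FILE at that wrapper would then keep the hardcoded paths out of CMakePresets.json.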
We can have multiple presets for different build configurations (e.g. CPU, GPU) and switch between them easily. Here is the full file with both, plus matching build presets:
{
  "version": 3,
  "configurePresets": [
    {
      "name": "gpu",
      "displayName": "Standalone GPU Build",
      "generator": "Unix Makefiles",
      "binaryDir": "${sourceDir}/buildgpu",
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Release",
        "STANDALONE": "ON",
        "TARGET_DEVICE": "CUDA",
        "CMAKE_EXPORT_COMPILE_COMMANDS": "ON",
        "CMAKE_TOOLCHAIN_FILE": "/cvmfs/lhcb.cern.ch/lib/lhcb/lcg-toolchains/LCG_106c/x86_64_v3-el9-gcc13+cuda12_4-opt+g.cmake",
        "CUDNN_INCLUDE_DIR": "/home/melashri/local/cuda/include",
        "CUDNN_LIBRARY": "/home/melashri/local/cuda/lib64/libcudnn.so"
      }
    },
    {
      "name": "cpu",
      "displayName": "Standalone CPU Build",
      "generator": "Unix Makefiles",
      "binaryDir": "${sourceDir}/buildcpu",
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Release",
        "STANDALONE": "ON",
        "TARGET_DEVICE": "CPU",
        "CMAKE_EXPORT_COMPILE_COMMANDS": "ON",
        "CMAKE_TOOLCHAIN_FILE": "/cvmfs/lhcb.cern.ch/lib/lhcb/lcg-toolchains/LCG_106c/x86_64_v3-el9-gcc13-opt+g.cmake"
      }
    }
  ],
  "buildPresets": [
    {
      "name": "gpu",
      "configurePreset": "gpu"
    },
    {
      "name": "cpu",
      "configurePreset": "cpu"
    }
  ]
}
Anyway, I no longer need to remember long CMake flags. In VSCode we can do the following:
- Run CMake: Select Configure Preset → gpu
- Then CMake: Configure, and you're ready to build
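The same presets also work from a plain terminal, which is handy over SSH; a minimal sketch, assuming a recent CMake (3.21+ for version-3 presets):

# list the presets defined in CMakePresets.json
cmake --list-presets
# configure and build the GPU flavour without opening VSCode
cmake --preset gpu
cmake --build --preset gpu --parallel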
Make IntelliSense Stop Complaining
Once the build is configured, CMake will generate a compile_commands.json in the build directory. I symlink it to the project root:
ln -sf buildgpu/compile_commands.json compile_commands.json
Then I make sure VSCode knows where to look by editing .vscode/settings.json:
{
  "C_Cpp.default.compileCommands": "${workspaceFolder}/compile_commands.json",
  "C_Cpp.default.configurationProvider": "ms-vscode.cmake-tools",
  "C_Cpp.intelliSenseEngine": "default",
  "C_Cpp.intelliSenseEngineFallback": "Disabled",
  "cmake.sourceDirectory": "${workspaceFolder}",
  "cmake.buildDirectory": "${workspaceFolder}/buildgpu"
}
This solves 95% of the “missing include” or “unknown symbol” false errors.
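When it doesn't, the first thing I check is that the database actually carries the real flags; a quick peek at one entry works (assuming jq is installed):

# inspect the first compilation database entry: file, directory, full compiler command
jq '.[0]' compile_commands.json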
Notes
- The compile_commands.json has everything IntelliSense needs: real compiler flags, include paths (including CUDA and Boost), macros, etc.
- I don't need to manually fiddle with c_cpp_properties.json; that's a thing of the past once the compilation database is in place.
- Using CMakePresets.json also makes it easier to switch between cpu and gpu builds without rewriting commands or editing shell scripts (see the sketch after this list).
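On that last point, switching flavours mostly means re-pointing things at the other build directory; a minimal sketch, assuming the cpu preset from the full file above:

# configure the CPU flavour and re-point IntelliSense at its compilation database
cmake --preset cpu
ln -sf buildcpu/compile_commands.json compile_commands.json
# note: "cmake.buildDirectory" in .vscode/settings.json also needs to point at buildcpu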
This setup has made working on Allen much less frustrating for me (at least for the build and for catching simple errors). I can jump between kernels, headers, and host code smoothly, and the tooling mostly gets out of the way. I still have to deal with subtle errors that IntelliSense doesn't catch, but that's a different story.