I've seen several projects where the build rewrites some part of the code to generate a custom binary for each target environment. This always seems to make things much more complicated than they need to be, and introduces a risk that the team won't have consistent versions on each box. At a minimum it involves building multiple, near-identical copies of the software which then have to be deployed to the right places. That means more moving parts than necessary, which means more opportunities to make a mistake. I've worked on a team where every change had to be checked in for a full build cycle, which meant that our testers were left idle whenever we made minor setting adjustments. I've worked on a team where the system administrators insisted on rebuilding from scratch for production, which meant that we had no proof that the version in production was the one that had been through testing. And so on.
The rule is quite simple. Build a single binary that you can identify and promote through all the stages in the release pipeline. Hold environment-specific details "in the environment", for example in the container settings, in a known file, or in the path. Of course there are exceptions, say for targets that have extreme resource constraints, but they don't apply to the majority of us who are writing database-to-screen-and-back-again systems. If you do have a code-mangling build, then it suggests that either the team hasn't thought through its design and worked out where the moving parts are or, worse, the team has but cannot follow through.
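To make the rule concrete, here's a minimal sketch of what "hold the details in the environment" can look like in practice. The names (`DB_URL`, `APP_CONFIG`, the `/etc/myapp/` path) are illustrative assumptions, not part of any particular system; the point is that the same artifact runs unchanged everywhere, and only its surroundings differ.

```python
import json
import os
from pathlib import Path

def load_settings(env=os.environ):
    """Resolve settings at runtime from the environment, not at build time.

    Precedence (one common convention, not the only one):
      1. an environment variable, as set in the container or shell;
      2. a known config file at a conventional location;
      3. a safe development default.
    The binary itself is identical in every environment.
    """
    # 1. Container / shell environment wins. (DB_URL is a hypothetical name.)
    db_url = env.get("DB_URL")
    if db_url:
        return {"db_url": db_url}

    # 2. Fall back to a known file; its location can itself come from
    #    the environment (APP_CONFIG), with a conventional default.
    config_path = Path(env.get("APP_CONFIG", "/etc/myapp/config.json"))
    if config_path.exists():
        return json.loads(config_path.read_text())

    # 3. Last resort: a default that is only suitable for development.
    return {"db_url": "sqlite:///dev.db"}
```

With this shape, promoting a build from test to production changes nothing in the artifact; the operators change the container settings or the file, and the identical binary picks them up on the next start.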