Before you set me aflame, please take a moment to read.
Starting a Docker container takes time. This slows down your workflow, consumes developer time and ultimately costs the business money.
Developers are incentivized to create more containers. In my experience, this leads to greater memory consumption and worse performance: in-memory caches, database management systems and the like get created separately for each (micro)service, because the developers of each service just want to get their task done quickly instead of thinking about the footprint of the system as a whole. In a microservice architecture with one container per microservice, everyone doing their own thing and rolling their own chosen database can cause problems when the different services have to communicate.
Building a Docker image and downloading images from Docker Hub takes a long time on slower connections. Even if you can download a container image in 10 seconds, think about people in other countries; not everyone has a great network connection. You may need to work with people remotely, and some developers may be coding from a mobile hotspot. Do you want to pay for their data plan? I guess not.
Docker cannot replace a good sysadmin. You still need a sysadmin to configure the packages and versions used in your Docker containers. A good sysadmin will try to keep the same version of a piece of software on all the servers the company runs. That way, there is only one version to track for security issues and patch notes, and upgrade procedures need to be done only once for all the servers. If you have many different versions across many Docker containers, they all slowly become outdated, and their security vulnerabilities affect your projects.
Docker's philosophy is to containerize and isolate applications: to have a .yaml file specifying the environment and the versions of the different packages against which the software is run. With the environment being so easy to change, some developers will inevitably build images with all kinds of odd software in different versions. So the initial problem of having different versions only gets worse over time, with more containers, more packages and more different versions. If everyone is encouraged to put their application in whatever environment they choose, using some dumb language like NodeJS or Python suddenly is no problem anymore. Everyone can keep their little package zoo in their Docker container. This way, the developers in your organization may use whatever they are personally most familiar with. If they leave, you had better find someone who knows that particular set of technologies to maintain their applications!
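To make the "package zoo" concrete, here is a hypothetical docker-compose.yml of the kind I mean (all service names and versions are made up for illustration): each team pins its own runtime and rolls its own database engine, so a single project ends up running several independently aging stacks side by side.

```yaml
# Hypothetical example: two microservices, each with its own pinned
# runtime and its own choice of database engine.
services:
  orders:
    image: node:14-alpine        # team A pinned Node 14
    depends_on: [orders-db]
  orders-db:
    image: postgres:12           # team A chose Postgres 12
  billing:
    image: python:3.8-slim       # team B pinned Python 3.8
    depends_on: [billing-db]
  billing-db:
    image: mysql:5.7             # team B chose MySQL 5.7
```

Every pinned version here (node:14, postgres:12, python:3.8, mysql:5.7) must now be tracked for security updates separately, which is exactly the tracking burden a single shared version would avoid.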
The only thing that can MAYBE deliver +500% developer productivity, IMHO, is a good IDE with autocomplete for statically typed languages.
I personally experienced a 90% drop in productivity once by using Docker on my older, low-spec netbook, because my coworkers had the not-so-smart idea of putting the RDBMS into a container and starting it from a docker-compose file alongside the application. It would have been better for me if we had just configured a server via ssh, instead of starting, stopping and rebuilding images and running the database on every dev machine. You only need one development database; you do not need one per developer. That is just wasteful of computing resources.
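What I am suggesting instead can be sketched in a hypothetical compose file (the hostname and credentials are placeholders): the application still runs locally, but it connects to one shared development database server via configuration, so nothing database-related runs on the dev machine at all.

```yaml
# Hypothetical: the app container connects to a shared dev database
# server instead of spinning up its own database container.
services:
  app:
    build: .
    environment:
      # Placeholder connection string; point it at the one shared dev DB.
      DATABASE_URL: postgres://devuser:devpass@dev-db.internal.example:5432/appdb
```

One person administers that shared server once, and low-spec machines like my netbook never have to carry a database process.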
Well, you might deploy more frequently, but your deployments will definitely take longer because of Docker container startup time.