Hi @vcarceler,
There are two services in docker related to the RabbitMQ messaging:
- taiga-async-rabbitmq, to manage the asynchronous tasks in Taiga, like the email delivery or the project import background processes.
- taiga-events-rabbitmq, to manage the asynchronous user notifications, like the mentions in the comments or the description.
We have monitored the CPU usage for the two rabbitmq services, and it should by no means be a constant 21.6%. Its usual value should be around 0.7-1% for both, with occasional short-lived peaks of 20% (lasting just a second or so).
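As a quick check of what the containers are actually consuming, docker stats shows their live CPU usage (a sketch, assuming the default taiga-docker container names):

$ docker stats --no-stream taiga-docker-taiga-async-rabbitmq-1 taiga-docker-taiga-events-rabbitmq-1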
The first thing we would suggest is to verify that the two pairs of services are properly configured, reviewing the startup logs of taiga-events-rabbitmq/taiga-async-rabbitmq and their consumers taiga-async/taiga-events. They shouldn't show any errors, and the consumers should connect correctly to rabbitmq.
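For example, you can review those logs with docker logs (again assuming the default taiga-docker container names):

$ docker logs taiga-docker-taiga-async-rabbitmq-1
$ docker logs taiga-docker-taiga-events-rabbitmq-1
$ docker logs taiga-docker-taiga-async-1
$ docker logs taiga-docker-taiga-events-1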
In order to have more information about the number of queues and the status of their messages, it could be a good idea to expose the internal management UI ports in docker-compose.yml:
  taiga-async-rabbitmq:
    image: rabbitmq:3.8-management-alpine
    ports:
      - "15673:15672"

  taiga-events-rabbitmq:
    image: rabbitmq:3.8-management-alpine
    ports:
      - "15672:15672"
This would allow access to http://<TAIGA_DOMAIN>:15672 and http://<TAIGA_DOMAIN>:15673 to monitor the two rabbitmq services.
You shouldn't see any high numbers there (whether for connections, queues or messages), as these can be the source of high CPU usage (according to the link you provided).
If you prefer, you can get the number of queues via the CLI, as @Pablohn26 suggested, once connected to either of the two containers (taiga-docker-taiga-async-rabbitmq-1 or taiga-docker-taiga-events-rabbitmq-1):
$ docker exec -it taiga-docker-taiga-async-rabbitmq-1 /bin/bash
bash-5.1# rabbitmqctl list_queues -p taiga
RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
Timeout: 60.0 seconds ...
Listing queues for vhost taiga ...
name messages
tasks 4
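From the same shell you can also list the connections and the consumers per queue, which helps to cross-check the numbers shown in the management UI (the taiga vhost matches the one used above):

bash-5.1# rabbitmqctl list_connections
bash-5.1# rabbitmqctl list_consumers -p taiga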
It would also be interesting to monitor the complete execution of an asynchronous task flow to detect any possible problems. You could take the project import task, for example:
- Export a project from Taiga using the menu Settings > Project > Export (it will create an asynchronous task to send an email with the link to download the exported .json).
- Stop the taiga-async service from docker:
$ docker stop taiga-docker-taiga-async-1
- Import the previous .json as a new project via Project > New project > Import project > Taiga (this should create a message in the task queue in a ready status, as the consumer service is stopped).
- Re-launch the Celery service to process the project import task:
$ docker start taiga-docker-taiga-async-1
- Any queued messages should have been consumed, and there shouldn't be any message left in the "Ready" status.
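If you want to see the message appear in the queue and then be consumed, you can re-run the queue listing between those steps (assuming the same container name as above; name, messages and messages_ready are standard list_queues columns):

$ docker exec taiga-docker-taiga-async-rabbitmq-1 rabbitmqctl list_queues -p taiga name messages messages_ready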
One last thing to try could be to disable the rabbitmq management plugin itself, as it sometimes involves high CPU usage.
$ docker exec -it taiga-docker-taiga-async-rabbitmq-1 /bin/bash
bash-5.1# rabbitmq-plugins disable rabbitmq_management
RABBITMQ_ERLANG_COOKIE env variable support is deprecated and will be REMOVED in a future version. Use the $HOME/.erlang.cookie file or the --erlang-cookie switch instead.
Disabling plugins on node rabbit@taiga-async-rabbitmq:
rabbitmq_management
The following plugins have been configured:
rabbitmq_management_agent
rabbitmq_prometheus
rabbitmq_web_dispatch
Applying plugin configuration to rabbit@taiga-async-rabbitmq...
The following plugins have been disabled:
rabbitmq_management
stopped 1 plugins.
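If this doesn't make a difference, the plugin can be re-enabled at any time:

bash-5.1# rabbitmq-plugins enable rabbitmq_management

Note that, since the enabled plugins are stored inside the container, this change may be lost when the container is recreated.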
We really hope this answer helps, and please keep us informed if you finally come to a solution or if you need more help.