Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Creator

Hello,

Over the past few weeks I have been setting up a Python app that is decomposed into two parts:

  1. a very simple Flask web app that displays a home page and sends long-running tasks to
  2. a Celery worker that runs these tasks in the background

In order to do so, I have followed the instructions from the Flask documentation: http://flask.pocoo.org/docs/1.0/patterns/celery/#run-a-worker

I now have the following six files in my project, from which I push the app:

main.py

tasks.py

startWorker.py

Procfile

manifest.yml

requirements.txt

 

These two instances communicate over the message broker RabbitMQ. While the app works perfectly on my local machine, I had to do the following in order to run it on MindSphere (according to the Cloud Foundry documentation):

  • set up the RabbitMQ service on my tenant
  • bound this service to my app
  • retrieved the RabbitMQ credentials, host and port to construct a CELERY_BROKER_URL = amqp://username:password@host:port, which I configure in instances 1 and 2 of my app
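On Cloud Foundry the credentials of a bound service are injected into the container through the VCAP_SERVICES environment variable, so the broker URL can be assembled at runtime instead of being hard-coded. A minimal sketch; the lookup key 'rabbitmq' and the credential field names are assumptions, so check cf env MYAPP for the actual structure on your tenant:

```python
# Build the Celery broker URL from VCAP_SERVICES, which Cloud Foundry injects
# into the app container for every bound service. The lookup key "rabbitmq"
# and the credential field names below are assumptions; inspect `cf env MYAPP`
# to see the real structure.
import json
import os


def broker_url_from_vcap(vcap_json, service_name='rabbitmq'):
    services = json.loads(vcap_json)
    for instances in services.values():
        for instance in instances:
            if service_name in instance.get('name', '') or service_name in instance.get('label', ''):
                creds = instance['credentials']
                if 'uri' in creds:  # some brokers expose a ready-made URI
                    return creds['uri']
                return 'amqp://%s:%s@%s:%s' % (creds['username'], creds['password'],
                                               creds['host'], creds['port'])
    raise LookupError('no %s service bound to this app' % service_name)


if 'VCAP_SERVICES' in os.environ:  # only set inside a Cloud Foundry container
    CELERY_BROKER_URL = broker_url_from_vcap(os.environ['VCAP_SERVICES'])
```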

Now, when pushing my app, I specify my two processes using a file called Procfile with the following content:

web: python main.py
worker: python startWorker.py

 

startWorker.py executes the following command:

os.system('celery worker -A main.celery -E -P eventlet --loglevel=info -b %s'%CELERY_BROKER_URL)
This starts the Celery worker and binds it to the Celery instance inside main.py. Upon pushing the app, the Cloud Foundry log outputs the following:
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT  -------------- celery@7c548404-9bab-472a-55da-8892 v4.3.0 (rhubarb)
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT ---- **** -----
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT --- * ***  * -- Linux-4.15.0-47-generic-x86_64-with-debian-buster-sid 2019-06-24 13:02:56
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT -- * - **** ---
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT - ** ---------- [config]
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT - ** ---------- .> app:         default:0x7f67602f1978 (.default.Loader)
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT - ** ---------- .> transport:   amqp://username:**@rad42ac65.service.dc1.a9ssvc:5672//
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT - ** ---------- .> results:     disabled://
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT - *** --- * --- .> concurrency: 8 (eventlet)
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT -- ******* ---- .> task events: ON
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT --- ***** -----
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT  -------------- [queues]
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT                 .> celery           exchange=celery(direct) key=celery
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT [tasks]
   2019-06-24T15:02:56.06+0200 [APP/PROC/WORKER/0] OUT   . main.launchFoulingDetection 
2019-06-24T15:02:56.12+0200 [APP/PROC/WORKER/0] ERR [2019-06-24 13:02:56,129: INFO/MainProcess] mingle: searching for neighbors
2019-06-24T15:02:57.16+0200 [APP/PROC/WORKER/0] ERR [2019-06-24 13:02:57,168: INFO/MainProcess] mingle: all alone
2019-06-24T15:02:57.19+0200 [APP/PROC/WORKER/0] ERR [2019-06-24 13:02:57,193: INFO/MainProcess] celery@7c548404-9bab-472a-55da-8892 ready.
2019-06-24T15:02:57.20+0200 [APP/PROC/WORKER/0] ERR [2019-06-24 13:02:57,205: INFO/MainProcess] pidbox: Connected to amqp://a9s-brk-usr-0eadaa83349c8a42430102c427d6281b2ad75af7:**@rad42ac65.service.dc1.a9ssvc:5672//.
It thus seems that the worker has started and that the Celery task 'launchFoulingDetection' included in my web app main.py is recognised.
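Incidentally, the os.system call in startWorker.py discards failure details; a sketch of an alternative using subprocess, with the same flags, so that a crashing worker would propagate its exit status to Cloud Foundry (CELERY_BROKER_URL is a placeholder as in the post):

```python
# Sketch of an alternative startWorker.py: build the argument list explicitly
# and launch the worker with subprocess instead of os.system, so the worker's
# exit status propagates to Cloud Foundry and a crash becomes visible.
import subprocess


def build_worker_cmd(broker_url):
    return ['celery', 'worker',
            '-A', 'main.celery',    # bind to the Celery instance in main.py
            '-E',                   # emit task events
            '-P', 'eventlet',       # eventlet concurrency pool
            '--loglevel=info',
            '-b', broker_url]       # broker URL from the bound RabbitMQ service


# In startWorker.py one would then run, e.g.:
#   import sys
#   sys.exit(subprocess.call(build_worker_cmd(CELERY_BROKER_URL)))
```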
 
The task is then sent to the worker using the following command:
launchFoulingDetection.delay()

where launchFoulingDetection is a function inside main.py and .delay() is the way for Celery to wrap the task into a message that RabbitMQ can send to the worker (more information here: https://docs.celeryproject.org/en/latest/userguide/calling.html).

 

On my local machine, upon sending the task to the worker, the worker log would show something like this 

[2019-06-24 15:06:27,390: INFO/MainProcess] Received task: flask_celery_local_test.launchFoulingDetection[7ee3e7e5-45f5-46d9-91aa-ba8e03bb14bb]
[2019-06-24 15:06:27,391: WARNING/MainProcess] fouling detection happening now
[2019-06-24 15:06:37,387: WARNING/MainProcess] fouling detection ended
[2019-06-24 15:06:37,387: INFO/MainProcess] Task flask_celery_local_test.launchFoulingDetection[7ee3e7e5-45f5-46d9-91aa-ba8e03bb14bb] succeeded in 10.0s: None

But the Cloud Foundry logs do not show anything. The task seems to vanish into thin air.

I have found the following post, which suggests a deadlock between the worker and RabbitMQ, but the post is from 2012 and it doesn't provide a solution to the problem:

https://stackoverflow.com/questions/11088477/how-can-i-communicate-with-celery-on-cloud-foundry

 

I am therefore looking for suggestions that could help me solve my problem, or for further things that need to be configured for the tasks to be able to reach the worker.

I hope my question is understandable; I tried to keep it concise by not copy-pasting all of my files into it, but please ask for further information if needed.


Best

Theo

8 REPLIES

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Is this a background app? Have you followed the MindSphere guideline on how to configure the background worker so that it runs?

1) background app:

https://developer.mindsphere.io/paas/paas-cloudfoundry-howtos.html#background-tasks-processes

You should also use the health-check type process in your manifest.yml.

2) health-check type process:

https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Creator

I have stumbled across these pages of the documentation before, but they do not apply to my situation for two reasons:

 

1. When defining --no-route in the manifest.yml, it applies to both the worker and the web process. Without a route, the web process can't display a web page when I call its URL. I need the UI/webpage, as it will eventually be used to configure certain parameters of the worker process.

 

2. I think that the documentation you pointed at in your point 1) doesn't refer to worker processes (which are inherently background processes on which no health checks are performed: https://docs.cloudfoundry.org/devguide/multiple-processes.html). It merely tells me how to turn a normal web process into a background process.

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Legend
Are you sure all processes are properly started in the Cloud Foundry droplet? I haven't used the Procfile approach myself, and a quick Google search doesn't really give much information. I'm wondering whether the worker and web processes run at the same time; at least with manifests you cannot give two different startup commands for the same app.

I'd ssh into the started container and check that everything is OK in there with ps, maybe do some local debugging, etc.

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Creator

Hi dlouzan,

From what I can see using cf logs MYAPP, both processes seem to work fine. I have managed to get them to work in parallel when they were independent. My problem lies in getting them to communicate over RabbitMQ.

 

About ssh-ing into the container, that was my thought exactly: the 'top' command shows the Python command that is running (my Flask web app), but it doesn't display the Celery worker even though I can clearly see, in the logs, that it has started properly. Now I don't know if this means that my Celery worker crashes instantly after starting (if so, there is no trace of it in the logs) or if the partitioning of the processes done by Diego is responsible for me not seeing the Celery worker.

[screenshot attachment: top_screenshot_no_celery.png]

 

EDIT: I pushed the app with a dummy Python script in the worker process (instead of trying to start a Celery worker), and even though it is clearly running (regular output to the logs), it doesn't appear as a Python process in the top display. Thus, I assume that it is due to the way Diego partitions the processes.

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Legend
Can you communicate somehow with this hidden process? For example, reach an open port and curl to it?

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Creator

Unfortunately, no. At least I haven't found any documentation about this that could help me find the port that would give me access to a specific Diego process.

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Legend
How about setting up the app as two independent CF apps, over which you would have control in the manifest for exposed ports, etc.? That should in principle work, but I don't know about your architectural limitations. And of course this might also have an impact on the resources needed.
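For reference, two separate apps sharing the same RabbitMQ service instance could be declared roughly like this in manifest.yml. This is a sketch under the assumption of standard manifest attributes; all app and service names are placeholders, and the worker gets no route plus a process health check:

```yaml
---
applications:
- name: myapp-web
  command: python main.py
  services:
  - my-rabbitmq              # bound RabbitMQ service instance (placeholder name)
- name: myapp-worker
  command: python startWorker.py
  no-route: true             # the worker exposes no HTTP endpoint
  health-check-type: process # keep the container alive as long as the process runs
  services:
  - my-rabbitmq
```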

Re: Python: flask app sending tasks to celery background worker in cloudFoundry using rabbitMQ

Creator

Hi dlouzan,

Thanks for your response. I have chosen to go with simple threading of the background jobs within my web app, which isn't the cleanest but is definitely the easiest way for now.