Task status shows PENDING even after it was SUCCESS

Greetings!


I have noticed that after I fit the model and generate a synthetic dataset, the task status first shows SUCCESS (or FAILURE), but some time later it shows PENDING for all the datasets.


In more detail:

I store the task_id in the model metadata right after calling:

om.runtime.model(model_store_name).fit(...)

and

om.runtime.model(model_store_name).predict(...)


Then I retrieve the task status as:

# look up the stored task id for this dataset and query Celery for its status
tasks = meta_data.attributes.get('tasks')
if tasks:
    task_id = tasks[dataset_store_name]
    task = om.runtime.celeryapp.AsyncResult(task_id)
    task_status = task.status

So, when I request the status right after the fit/generate call, I get a PENDING status.

Some time later it shows SUCCESS, and everything seems to work fine. I even have a dataset that deliberately produces a FAILURE status (for test purposes).

But when I come back after a while, all the datasets show PENDING again, even though the model is already fitted and the synthetic dataset generated.


Currently I use a workaround like this:

if task_status == TaskStatus.PENDING and om.datasets.list(synthetic_dataset_store_name):
    task_status = TaskStatus.SUCCESS


But it seems this issue should be solved properly, not worked around.

Could you please help me find the reason why this happens and what can be done to avoid it?


Best regards,

Daria

Comments

  • edited March 29

    But, after a while when I come back all the datasets show PENDING. Despite that I already have the model fitted, and synthetic dataset generated.

    omegaml tasks are submitted to Celery with a result expiration of 1 hour (3600 seconds). After this time the result is deleted from the results backend. This is a precautionary measure to avoid unbounded resource use in the backend due to a large number of tasks being submitted. Thus task results should only be used to track the actual execution. Once the result has been deleted from the backend, the state reverts to PENDING (which is Celery's default state and actually means "unknown").

    what can be done to avoid it?

    If you need to know the task status beyond the actual task tracking, the best practice is to record it yourself, e.g. in the model metadata (assuming the task fits a model):

    # record the outcome durably once the task has completed
    result = om.runtime.model('mymodel').fit(...)
    ...
    if result.status == 'SUCCESS':
        meta = om.models.metadata('mymodel')
        meta.attributes['fit_task'] = 'SUCCESS'
        meta.save()

    # later, read the durable status back from the metadata
    task_status = om.models.metadata('mymodel').attributes.get('fit_task')
    

    PS: There is an enhancement request at https://github.com/omegaml/omegaml/issues/114 - please be sure to vote for it if you find it would be a useful feature.

  • Thank you for the response!

    It is a great suggestion. Will implement it!


    Best regards,

    Daria
