Need assistance with running PathML package via Colab or Docker and resolving errors #350

Open
DRSEI opened this issue May 15, 2023 · 6 comments

@DRSEI

DRSEI commented May 15, 2023

I have encountered some difficulties while attempting to use the PathML package via both Colab and Docker. I am unable to run my script successfully, and I also cannot run it on my own file. I would appreciate any guidance or assistance in resolving these problems.

When using Docker, I am experiencing frequent crashes. The error message I receive is as follows:

Kernel Restarting
The kernel for Untitled.ipynb appears to have died. It will restart automatically.

On the other hand, when I run the code via Colab, I encounter an issue in the following section:

slidedata.run(pipe, distributed=False, tile_pad=False);

The error message associated with this problem is as follows:

IndexError: index 28 is out of bounds for axis 3 with size 28.

To provide more context, I have formatted the QuPath output. My notebook is available here: https://colab.research.google.com/drive/12Iz2ov-GojJ-0zzRrsxHNfn53LqfhhCz?usp=sharing

The associated files are here:
https://docs.google.com/spreadsheets/d/13Ipoo2prIIK9xt8rF8Nuj9X-dXXTrIeK/edit?usp=sharing&ouid=117700413165074195674&rtpof=true&sd=true
https://drive.google.com/file/d/1BPVAWpe4ZyxIDSzPADo2uI8VSrAGq4v5/view?usp=sharing
https://drive.google.com/file/d/1DKvEnoEtO82AN3IYvmBCLEVJQXUTFDaR/view?usp=sharing

I would greatly appreciate any guidance or assistance in troubleshooting and resolving these issues. Thank you for your time and support.

@jacob-rosenthal
Collaborator

So from what I can see in the notebook, the image has 28 channels total, and you want to use the last channel as the cytoplasm channel? If so, then you would use index 27 (cytoplasm_channel=27) because the indexing system is zero-based. That might explain why the error message is about index being out of range.
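For reference, here is a minimal sketch of where that index would go. The parameter names follow the PathML multiparametric (MIF) vignette and may differ slightly depending on your version and pipeline; the nuclear channel index and image resolution below are placeholders, not values taken from your notebook:

```python
from pathml.preprocessing import Pipeline, SegmentMIF, QuantifyMIF

# With 28 channels, valid indices are 0..27, so the last channel is 27.
pipe = Pipeline([
    SegmentMIF(
        model="mesmer",
        nuclear_channel=0,        # placeholder: index of your nuclear stain
        cytoplasm_channel=27,     # last of 28 channels (zero-based)
        image_resolution=0.5,     # placeholder: microns per pixel for your platform
    ),
    QuantifyMIF(segmentation_mask="cell_segmentation"),
])
```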

The crashing Jupyter kernel is harder to diagnose. It could be due to memory constraints on your machine, e.g. holding too much data in memory at once.
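If you want to check the memory angle, here is a quick sketch you could run inside the Docker notebook. It uses psutil, which is a general-purpose package and not part of PathML; install it if it isn't already in the container:

```python
import psutil

# How much RAM the kernel can actually see inside the container
mem = psutil.virtual_memory()
print(f"total RAM:  {mem.total / 1e9:.1f} GB")
print(f"available:  {mem.available / 1e9:.1f} GB")

# If this is only a few GB, raise the container's memory limit when starting it,
# e.g. `docker run ... --memory=16g <your-pathml-image>` (flag shown as an example).
```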

@DRSEI
Author

DRSEI commented May 16, 2023

Hello @jacob-rosenthal, I appreciate your previous assistance. However, I'm currently encountering additional issues. Please refer to the following information:

```
INFO:distributed.http.proxy:To route to workers diagnostics web server, please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
INFO:distributed.scheduler:State start
INFO:distributed.scheduler:Scheduler at tcp://127.0.0.1:34389
INFO:distributed.scheduler:Dashboard at 127.0.0.1:8787
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:46283'
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:37099'
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:34189'
INFO:distributed.nanny:Start Nanny at 'tcp://127.0.0.1:33063'
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:42611', name: 3, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:42611
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46628
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:39977', name: 2, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:39977
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46618
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:33903', name: 1, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:33903
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46602
INFO:distributed.scheduler:Register worker <WorkerState 'tcp://127.0.0.1:39173', name: 0, status: init, memory: 0, processing: 0>
INFO:distributed.scheduler:Starting worker compute stream, tcp://127.0.0.1:39173
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46638
INFO:distributed.scheduler:Receive client connection: Client-48ecb7b6-f381-11ed-90ba-0242ac1c000c
INFO:distributed.core:Starting established connection to tcp://127.0.0.1:46642

```
Unfortunately, the code encounters an error during execution:

```
---------------------------------------------------------------------------
error Traceback (most recent call last)
in <cell line: 2>()
1 # Run the pipeline
----> 2 slidedata.run(pipe, distributed = True, tile_pad=False)
3
4
5

16 frames
/usr/lib/python3.10/gzip.py in read()
494 buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
495
--> 496 uncompress = self._decompressor.decompress(buf, size)
497 if self._decompressor.unconsumed_tail != b"":
498 self._fp.prepend(self._decompressor.unconsumed_tail)

error: Error -3 while decompressing data: invalid block type
```


@jacob-rosenthal
Collaborator

I've never seen that before, and I'm not sure what is causing it. It seems likely that it's due to some incompatibility between Dask and Colab. I'd suggest trying with `distributed=False`.
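For clarity, the only change is in the run call itself; a minimal sketch, reusing the same `slidedata` and `pipe` objects from your notebook:

```python
# Run the pipeline in the local process instead of through the
# dask.distributed scheduler/workers that are failing on Colab.
slidedata.run(pipe, distributed=False, tile_pad=False)
```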

@DRSEI
Author

DRSEI commented May 16, 2023

I have followed your suggestion, but now I am encountering a new issue.

```
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:117: ImplicitModificationWarning: Transforming to str index.
warnings.warn("Transforming to str index.", ImplicitModificationWarning)
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:117: ImplicitModificationWarning: Transforming to str index.
warnings.warn("Transforming to str index.", ImplicitModificationWarning)
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:1755: FutureWarning: The AnnData.concatenate method is deprecated in favour of the anndata.concat function. Please use anndata.concat instead.

See the tutorial for concat at: https://anndata.readthedocs.io/en/latest/concatenation.html
warnings.warn(
WARNING:tensorflow:No training configuration found in save file, so the model was not compiled. Compile it manually.
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:117: ImplicitModificationWarning: Transforming to str index.
warnings.warn("Transforming to str index.", ImplicitModificationWarning)
/usr/local/lib/python3.10/dist-packages/anndata/_core/anndata.py:1755: FutureWarning: The AnnData.concatenate method is deprecated in favour of the anndata.concat function. Please use anndata.concat instead.
```

The execution of `slidedata.run(pipe, distributed=False, tile_pad=False)` has been running for over 10 minutes on a single file. I'm wondering if there is a Colab training notebook available that I can refer to for guidance. Alternatively, would you mind reviewing the script I posted earlier to see if I made any mistakes? I appreciate your assistance in troubleshooting this matter.

@jacob-rosenthal
Collaborator

The warnings should be fine to ignore. The workflow in the Colab notebook you posted looks fine to me. You can refer to the example vignettes in the documentation; they're at https://pathml.readthedocs.io/ under the "examples" section. Runtime will depend on several factors, including the computational resources of the environment you are running in, the size of the input data, the steps in the pipeline, etc. In this case, inference is being run with the Mesmer model for every tile, which can be relatively slow. In my experience, 10 minutes is not out of the ordinary; I have run the same pipeline on large images and it has taken up to 24 hours.
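If it helps to set expectations, one rough way to estimate total runtime is to time the pipeline on a single tile and multiply by the tile count. This is only a sketch; it assumes `slidedata.generate_tiles(...)` and `pipe.apply(tile)` are available as in the PathML preprocessing API, and the tile shape is just an example value:

```python
import time

# Time the full pipeline (including Mesmer inference) on one tile;
# total runtime is roughly (time per tile) x (number of tiles), plus overhead.
tile = next(slidedata.generate_tiles(shape=(256, 256), pad=False))
start = time.time()
pipe.apply(tile)
print(f"one tile took {time.time() - start:.1f} s")
```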

@DRSEI
Author

DRSEI commented May 16, 2023

I understand. I'll let the process continue running for a while, and I'll keep you updated on any progress or developments.
