Google disconnects your session after 5 minutes, since DFL is in their ban list. The old code is also based on Python 3.6, while Colab now requires 3.9.

1. Fork the DeepFaceLab repo on GitHub.
Note: it's wise not to share your GitHub fork, and to give it a unique name, or else Google will add that repo to its ban list as well. If you give the fork a unique name, say dfl2, then in Colab you have to replace deepfacelab with dfl2.

2. Edit requirements-colab.txt in your forked repo to this. Colab will then install packages instantly that are compatible with Python 3.9, and it doesn't have to build anything from scratch (that building is why installing DeepFaceLab normally takes so long).

3. Edit the Colab notebook that we use from the repo like this:
A. Remove the entire "Install or update DeepFaceLab" cell.
B. Create a new code cell and paste this (in a single cell or multiple cells).

Thank you Mr Narcissistrader13 for providing a fix. But it seems the fix has stopped working, at least for SAEHD training:

Silent start: choosed model "Pretrain320"
Silent start: choosed device NVIDIA A100-SXM4-40GB
Press enter in 2 seconds to override model settings.
Initializing models: 100% 5/5
  File "/content/NON/mainscripts/Trainer.py", line 46, in trainerThread
    Model = models.import_model(model_class_name)(
  File "/content/NON/models/ModelBase.py", line 193, in __init__
  File "/content/NON/models/Model_SAEHD/Model.py", line 676, in on_initialize
    SampleGeneratorFace(training_data_src_path, random_ct_samples_path=random_ct_samples_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
  File "/content/NON/samplelib/SampleGeneratorFace.py", line 48, in __init__
    raise ValueError('No training data provided.')
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
arviz 0.15.1 requires numpy>=1.20.0, but you have numpy 1.19.5 which is incompatible.
arviz 0.15.1 requires typing-extensions>=4.1.0, but you have typing-extensions 3.7.4.3 which is incompatible.
astropy 5.2.1 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
bokeh 2.4.3 requires typing-extensions>=3.10.0, but you have typing-extensions 3.7.4.3 which is incompatible.
cmdstanpy 1.1.0 requires numpy>=1.21, but you have numpy 1.19.5 which is incompatible.
cupy-cuda11x 11.0.0 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
google-cloud-bigquery 3.4.2 requires grpcio>=1.47.0, but you have grpcio 1.34.1 which is incompatible.
grpcio-status 1.48.2 requires grpcio>=1.48.2, but you have grpcio 1.34.1 which is incompatible.
jax 0.4.6 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
jaxlib 0.4.6+cuda11.cudnn86 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
librosa 0.10.0.post1 requires typing-extensions>=4.1.1, but you have typing-extensions 3.7.4.3 which is incompatible.
matplotlib 3.7.1 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
pydantic 1.10.6 requires typing-extensions>=4.2.0, but you have typing-extensions 3.7.4.3 which is incompatible.
xarray 2022.12.0 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
xarray-einstats 0.5.1 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
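Conflicts like the ones in the pip report can be inspected without rerunning pip: every installed distribution declares its requirements in its metadata, so a short script can compare them against the versions actually present. A minimal sketch (assuming the `packaging` library, which pip itself builds on, is available; `find_conflicts` is my own helper name, not part of any of these tools):

```python
# Sketch: reproduce pip's "X requires Y, but you have Z" report by checking
# each installed distribution's declared requirements against the versions
# actually installed in the current environment.
from importlib import metadata

from packaging.requirements import InvalidRequirement, Requirement
from packaging.version import Version


def find_conflicts():
    """Yield (package, requirement, installed_version) triples for every
    declared requirement that the installed version fails to satisfy."""
    for dist in metadata.distributions():
        for line in dist.requires or []:
            try:
                req = Requirement(line)
            except InvalidRequirement:
                continue  # skip rare malformed metadata lines
            # Skip extras / platform-specific requirements that don't apply.
            if req.marker and not req.marker.evaluate({"extra": ""}):
                continue
            try:
                installed = Version(metadata.version(req.name))
            except metadata.PackageNotFoundError:
                continue  # a missing dependency is a different problem
            if not req.specifier.contains(installed, prereleases=True):
                yield dist.metadata["Name"], str(req), str(installed)


if __name__ == "__main__":
    for pkg, req, have in find_conflicts():
        print(f"{pkg} requires {req}, but you have {have} which is incompatible.")
```

Running this inside the Colab session after the install step shows exactly which pins in requirements-colab.txt are still dragging other packages out of range (here, mainly numpy 1.19.5 and typing-extensions 3.7.4.3).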
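The exact contents of the replacement install cell were lost from the scraped post, so the following is only a rough sketch of what such a cell typically looks like: clone the renamed fork and install its pinned Colab requirements. The fork name dfl2 comes from the note above; the GitHub username is a placeholder you would substitute yourself.

```shell
# Hypothetical replacement for the removed "Install or update DeepFaceLab" cell.
# Clones the uniquely named fork and installs the pinned requirements directly,
# so nothing has to be compiled from source during the session.
git clone https://github.com/<your-username>/dfl2.git /content/dfl2
python -m pip install -r /content/dfl2/requirements-colab.txt
```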