Finetuner 0.6.3 Update


Finetuner makes neural network fine-tuning easier and faster by streamlining the workflow and handling the complexity and infrastructure requirements in the cloud. With Finetuner, you can easily enhance the performance of pre-trained models and make them production-ready without expensive hardware.

GitHub - jina-ai/finetuner: Task-oriented finetuning for better embeddings on neural search

This release covers Finetuner version 0.6.3, including dependencies finetuner-api 0.4.2 and finetuner-core 0.10.4.

It contains 2 new features, 2 bug fixes, and 1 documentation improvement.

🆕 Features

Allocate more GPU memory in GPU environments

Previously, the run scheduler allocated 16GB of VRAM to GPU runs. Now it allocates 24GB.

Users can now fine-tune significantly larger models and use larger batch sizes.

Add WiSE-FT to CLIP finetuning (#571)

WiSE-FT is a recent development that has proven to be an effective way to fine-tune models with a strong zero-shot capability, such as CLIP. We have added it to Finetuner along with documentation on its use.

Finetuner allows you to apply WiSE-FT easily using WiSEFTCallback. Finetuner will trigger the callback when the fine-tuning job is finished and merge the weights between the pre-trained model and the fine-tuned model:

import finetuner
from finetuner.callbacks import WiSEFTCallback

run = finetuner.fit(
    model='ViT-B-32#openai',
    ...,
    loss='CLIPLoss',
    callbacks=[WiSEFTCallback(alpha=0.5)],
)

See the documentation for advice on how to set alpha.
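Conceptually, WiSE-FT produces the final weights by linearly interpolating, parameter by parameter, between the zero-shot (pre-trained) model and the fine-tuned model, with alpha controlling how far to move toward the fine-tuned weights. The sketch below illustrates that interpolation with NumPy; `wise_ft_merge` is a hypothetical helper for illustration, not Finetuner's internal implementation:

```python
import numpy as np

def wise_ft_merge(theta_zero, theta_ft, alpha=0.5):
    """Interpolate each parameter between the pre-trained (zero-shot)
    weights theta_zero and the fine-tuned weights theta_ft.
    alpha=0 keeps the pre-trained model, alpha=1 keeps the fine-tuned one."""
    return {
        name: (1 - alpha) * theta_zero[name] + alpha * theta_ft[name]
        for name in theta_zero
    }

# Toy example with a single parameter tensor
pretrained = {'proj': np.array([0.0, 2.0])}
finetuned = {'proj': np.array([1.0, 0.0])}
merged = wise_ft_merge(pretrained, finetuned, alpha=0.5)
```

With alpha=0.5 the merged 'proj' parameter above is the midpoint [0.5, 1.0], balancing zero-shot robustness against in-domain performance.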

๐Ÿž Bug Fixes

Fix image normalization for CLIP models (#569)

  • Finetuner's image processing was not identical to that used by OpenAI for training CLIP, potentially leading to inconsistent results.
  • The new version fixes the bug and matches OpenAI's preprocessing.
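For context, OpenAI's CLIP training pipeline normalizes images with a specific per-channel mean and standard deviation; preprocessing with different constants (e.g. the common ImageNet values) shifts the input distribution and can degrade embedding quality. The sketch below shows that normalization step with NumPy; `clip_normalize` is an illustrative helper, not Finetuner's actual preprocessing code:

```python
import numpy as np

# Per-channel RGB normalization constants from OpenAI's CLIP preprocessing
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073])
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711])

def clip_normalize(image):
    """Normalize a float image in [0, 1] with shape (H, W, 3)
    using the same constants CLIP was trained with."""
    return (image - CLIP_MEAN) / CLIP_STD
```

Matching these constants exactly is what the fix ensures, so that embeddings produced after fine-tuning are consistent with the pre-trained CLIP model.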

Add open_clip to FinetunerExecutor requirements

The previous version of FinetunerExecutor failed to include the open_clip package in its requirements, forcing users to add it manually to their executors. This has now been fixed.

📗 Documentation Improvements

Add callbacks documentation (#564)

There is now full documentation for using callbacks with Finetuner.

🤟 Contributors

We would like to thank all contributors to this release:

  • Wang Bo (@bwanglzu)
  • Louis Milliken (@LMMilliken)
  • Michael Günther (@guenthermi)
  • George Mastrapas (@gmastrapas)