stabilityai/StableBeluga2 · Hugging Face

May 17, 2024

Discover Stable Beluga 2: A Guide to Advanced Language Modeling

Stable Beluga 2 is a significant advancement in natural language processing. Developed by Stability AI, the model is built on Meta's Llama 2 70B architecture and fine-tuned on an Orca-style dataset, lifting its instruction-following ability beyond its predecessors.

How to Use Stable Beluga 2

Using Stable Beluga 2 is relatively straightforward once your environment has torch and transformers installed (the 70B weights also require substantial GPU memory). Below is a basic guide to interacting with the model in Python.

First, import the necessary libraries and load the model and tokenizer:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the 70B model; device_map="auto" spreads the weights
# across available GPUs, and float16 halves the memory footprint.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga2",
                                             torch_dtype=torch.float16,
                                             low_cpu_mem_usage=True,
                                             device_map="auto")

Next, set up your prompt. Stable Beluga 2 expects a specific format: a ### System: block, then a ### User: turn, then a ### Assistant: cue for the model to complete:

# The system prompt uses Beluga's "### System:" header and ends with a blank line.
system_prompt = "### System:\nYou are Stable Beluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"

Then you can generate a response from the model:

# Assemble the full prompt and move the encoded inputs to the model's device.
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Nucleus sampling: draw up to 256 new tokens from the top 95% of probability mass.
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
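If you plan to send several prompts, it can help to wrap these steps in a small function. This is a minimal sketch rather than anything from the official model card; the generate_response name is my own, and it reuses the model, tokenizer, and system_prompt defined above:

def generate_response(message, max_new_tokens=256):
    # Build the Beluga prompt: system block, user turn, assistant cue.
    prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0,
                            max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(generate_response("Summarize the rules of chess in three sentences."))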

Explore Other Models in the Beluga Series

For those interested in different capacities, Stability AI offers several other models in the Beluga series:

· StableBeluga 1 (delta weights)

· StableBeluga 13B

· StableBeluga 7B

Each of these models trades some capability for lower hardware requirements, so you can match model size to your task; the smaller checkpoints load the same way, as the sketch below shows.
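The only change needed is the repository id. A minimal sketch, assuming the stabilityai/StableBeluga-7B repo id on Hugging Face:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same loading pattern as the 70B example, pointed at the 7B checkpoint.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-7B",
                                             torch_dtype=torch.float16,
                                             low_cpu_mem_usage=True,
                                             device_map="auto")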

Model Details and Specifications

Stable Beluga 2 is an auto-regressive language model: it generates text one token at a time, each conditioned on all the tokens before it, which is what keeps its output coherent and contextually relevant. The model is intended for English use; testing conducted to date has been in English.
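To make "auto-regressive" concrete: the model predicts one token, appends it to the input, and repeats. model.generate performs this loop internally; the greedy-decoding sketch below is purely illustrative (no sampling, no caching) and reuses the model and tokenizer loaded earlier:

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
ids = inputs["input_ids"]
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids=ids).logits                      # scores over the vocabulary
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # most likely next token
        ids = torch.cat([ids, next_id], dim=-1)                   # append and predict again
print(tokenizer.decode(ids[0], skip_special_tokens=True))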

Training Made Transparent

The training process of Stable Beluga 2 relies on an internal dataset built in the style of Orca, which gives the model its versatility across text generation scenarios. The training setup involves the following (a generic sketch follows the list):

· Mixed-precision (BF16) training for efficiency

· The AdamW optimizer for stable convergence

· Batch sizes and learning rates tuned for best performance
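Stability AI has not published its training code, so the loop below is only a generic PyTorch illustration of the two techniques named above; model and dataloader stand in for a loaded checkpoint and a tokenized dataset, and the learning rate is a placeholder rather than a value used for Stable Beluga 2:

import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # placeholder learning rate

for batch in dataloader:  # placeholder dataloader of tokenized batches
    optimizer.zero_grad()
    # Run the forward pass in bfloat16; labels=input_ids gives the causal-LM loss.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        out = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
    out.loss.backward()   # bf16 has fp32-like range, so no gradient scaler is needed
    optimizer.step()      # AdamW update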

Ethical Use and Limitations

While Stable Beluga 2 represents a leap forward in language models, it's essential to remember that it is still a tool with limitations. Like other large language models, it can produce inaccurate, biased, or otherwise objectionable output, and its testing to date has covered only English, so developers should evaluate its safety and accuracy for their specific application before deploying it.
