
Getting Started with NeuralFlow

Go from zero to deployed ML model in under 5 minutes. This guide covers installation, authentication, and your first model deployment.

# Installation

Install the NeuralFlow SDK for your preferred language. SDKs are available for Python 3.8+, Node.js 16+, and Go 1.19+.

```shell
# Install the Python SDK
pip install neuralflow

# Or with optional ML frameworks
pip install neuralflow[torch,tensorflow]

# Verify installation
python -c "import neuralflow; print(neuralflow.__version__)"
# Output: 2.4.1
```
```shell
# Install via npm
npm install @neuralflow/sdk

# Or via yarn
yarn add @neuralflow/sdk

# Verify
node -e "console.log(require('@neuralflow/sdk').version)"
```
```shell
# Install the Go SDK
go get github.com/neuralflow/sdk-go@latest
```
Tip: We recommend using a virtual environment for Python installations. Run `python -m venv .venv` before installing.

# Authentication

All API requests require an API key. Generate your key from the NeuralFlow Dashboard under Settings > API Keys.

Set your API key

You can set your key as an environment variable or pass it directly:

auth.py

```python
import os

from neuralflow import NeuralFlow

# Option 1: Environment variable (recommended)
os.environ["NEURALFLOW_API_KEY"] = "nf_sk_your_key_here"
client = NeuralFlow()

# Option 2: Direct initialization
client = NeuralFlow(api_key="nf_sk_your_key_here")

# Option 3: CLI login (interactive)
# $ neuralflow auth login
```
Security: Never commit API keys to version control. Use environment variables or a secrets manager in production.
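One way to enforce this in application code is to read the key from the environment and fail fast when it is missing, rather than letting an unauthenticated request fail later. A minimal sketch (the helper name `require_api_key` is our own, not part of the SDK):

```python
import os


def require_api_key() -> str:
    # Read the key from the environment and fail fast if it is unset,
    # so misconfiguration surfaces at startup instead of at request time.
    key = os.environ.get("NEURALFLOW_API_KEY")
    if not key:
        raise RuntimeError(
            "NEURALFLOW_API_KEY is not set; export it or load it "
            "from your secrets manager"
        )
    return key
```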

# Your First Model

Let's train and deploy a sentiment analysis model in just a few lines of code.

1. **Initialize the client.** Create a NeuralFlow client instance with your API key.
2. **Load or create a dataset.** Upload training data or connect to an existing data source.
3. **Train the model.** Start a training run with AutoML or specify your own architecture.
4. **Deploy to production.** Push the trained model to a production endpoint with autoscaling.

quickstart.py

```python
from neuralflow import NeuralFlow, Dataset, Model

# Initialize client
nf = NeuralFlow()

# Create dataset from CSV
dataset = Dataset.from_csv(
    name="sentiment-data",
    path="./reviews.csv",
    text_column="review",
    label_column="sentiment",
)

# Train with AutoML
model = Model.train(
    name="sentiment-v1",
    task="text-classification",
    dataset=dataset,
    epochs=10,
    auto_optimize=True,
)

# Deploy to production
deployment = model.deploy(
    target="production",
    autoscale=True,
    min_replicas=2,
    max_replicas=10,
)
print(f"Deployed at: {deployment.endpoint}")

# Make a prediction
result = deployment.predict("This product is amazing!")
print(result)
# {"label": "positive", "confidence": 0.97}
```
Note: Training time depends on dataset size and model complexity. A typical text classifier with 10K samples trains in under 5 minutes on a GPU.

# Python SDK

The Python SDK is the most feature-complete client, supporting all NeuralFlow platform capabilities including AutoML, custom training loops, and real-time inference.

Key Classes

models.py

```python
from neuralflow import NeuralFlow

client = NeuralFlow()

# List all models
models = client.models.list()

# Get a specific model
model = client.models.get("model_abc123")

# Get model metrics
metrics = model.metrics()
print(f"Accuracy: {metrics.accuracy}")
print(f"F1 Score: {metrics.f1_score}")

# Model versioning
versions = model.versions()
model_v2 = model.rollback(version=2)
```

# Node.js SDK

The Node.js SDK supports async/await patterns and TypeScript out of the box.

predict.ts

```typescript
import { NeuralFlow } from '@neuralflow/sdk';

const nf = new NeuralFlow({ apiKey: process.env.NEURALFLOW_API_KEY });

async function classify(text: string) {
  const model = await nf.models.get('sentiment-v1');
  const result = await model.predict(text);
  return result;
}

classify("NeuralFlow is incredible!")
  .then(r => console.log(r));
// { label: "positive", confidence: 0.95 }
```

# Go SDK

The Go SDK is currently in beta. It supports model inference and basic management operations.

main.go

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/neuralflow/sdk-go"
)

func main() {
	client := neuralflow.NewClient(
		neuralflow.WithAPIKey(os.Getenv("NEURALFLOW_API_KEY")),
	)

	model, err := client.Models.Get("sentiment-v1")
	if err != nil {
		log.Fatal(err)
	}

	result, err := model.Predict("Great product!")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Label: %s, Confidence: %.2f\n", result.Label, result.Confidence)
}
```

# REST API

Use the REST API directly when SDKs are not available for your language. All endpoints use JSON and require Bearer token authentication.

request.sh

```shell
curl -X POST https://api.neuralflow.ai/v1/models/sentiment-v1/predict \
  -H "Authorization: Bearer nf_sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{"input": "This product exceeded my expectations!"}'

# Response:
# {
#   "prediction": {"label": "positive", "confidence": 0.96},
#   "model_version": "v3",
#   "latency_ms": 8
# }
```

# API Reference: Models

The Models API lets you create, train, manage, and deploy machine learning models programmatically.

Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/v1/models` | List all models |
| POST | `/v1/models` | Create a new model |
| GET | `/v1/models/{id}` | Get model details |
| PUT | `/v1/models/{id}` | Update model config |
| DELETE | `/v1/models/{id}` | Delete a model |
| POST | `/v1/models/{id}/train` | Start training run |
| POST | `/v1/models/{id}/deploy` | Deploy to production |
| POST | `/v1/models/{id}/predict` | Run inference |

Create Model Request

create_model.json

```json
{
  "name": "sentiment-classifier",
  "task": "text-classification",
  "framework": "pytorch",
  "config": {
    "base_model": "distilbert-base-uncased",
    "num_labels": 3,
    "learning_rate": 2e-5,
    "epochs": 10,
    "batch_size": 32
  }
}
```

# API Reference: Datasets

Upload, manage, and version your training datasets through the Datasets API.

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/v1/datasets` | List all datasets |
| POST | `/v1/datasets` | Upload new dataset |
| GET | `/v1/datasets/{id}` | Get dataset details |
| GET | `/v1/datasets/{id}/stats` | Get dataset statistics |
| DELETE | `/v1/datasets/{id}` | Delete a dataset |
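Before uploading, it can save a failed run to check that a CSV actually contains the columns your training job expects (for example, the `review` and `sentiment` columns from the quickstart). A small stdlib sketch; the helper name `check_columns` is our own, not part of the SDK:

```python
import csv


def check_columns(path: str, required: set) -> None:
    # Fail early if the CSV header is missing any required column,
    # rather than discovering the problem after the upload.
    with open(path, newline="") as f:
        header = set(next(csv.reader(f)))
    missing = required - header
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
```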

# API Reference: Deployments

Manage production deployments, autoscaling rules, and rollback strategies.

deploy_config.py

```python
from neuralflow import DeploymentConfig

config = DeploymentConfig(
    target="production",
    region="us-east-1",
    autoscale=True,
    min_replicas=2,
    max_replicas=20,
    gpu="nvidia-a100",
    health_check="/health",
    rollback_on_failure=True,
    canary_percentage=10,
)
```

# API Reference: Predictions

Run real-time or batch predictions against deployed models.

Batch Predictions

batch_predict.py

```python
from neuralflow import NeuralFlow

nf = NeuralFlow()
model = nf.models.get("sentiment-v1")

# Batch prediction
texts = [
    "Love this product!",
    "Terrible experience.",
    "Pretty good overall.",
]

results = model.predict_batch(texts)
for r in results:
    print(f"{r.input[:30]} -> {r.label} ({r.confidence:.2f})")

# Output:
# Love this product! -> positive (0.97)
# Terrible experience. -> negative (0.94)
# Pretty good overall. -> positive (0.72)
```

# API Reference: Monitoring

Track model performance, detect data drift, and set up automated alerts.

monitoring.py

```python
from neuralflow import Monitor

monitor = Monitor(model_id="model_abc123")

# Check for data drift
drift = monitor.check_drift()
print(f"Drift score: {drift.score}")
print(f"Drifted features: {drift.features}")

# Set up alerts
monitor.add_alert(
    metric="accuracy",
    threshold=0.85,
    condition="below",
    notify=["slack", "email"],
)

# Enable auto-retrain on drift
monitor.auto_retrain(
    trigger="drift",
    drift_threshold=0.15,
    dataset="latest",
)
```
Next steps: Check out the Deployment Guides for cloud-specific configuration, or visit the Community Forum if you have questions.