Getting started 🚀
If you want to start even faster, take a look at Shinkai Hosting, our ready-to-use service starting at only $7 per month.
Shinkai can also be run completely locally, giving you full control and privacy over your personal AI setup. In the following steps, we will create a basic setup with a local LLM and the Shinkai node running directly on your machine in no time.
1. Install a local AI Provider
To start using Shinkai, you first need access to an LLM (Large Language Model). You can use popular existing AI providers (OpenAI, TogetherAI, or other hosted LLM/AI solutions) or go the open-source/local route. In our case, we recommend Ollama as a convenient starting point for running any model locally 🎉.
Once Ollama is installed, open your terminal and run:
ollama pull mistral:7b-instruct-v0.2-q3_K_M
This downloads mistral (4.4 GB), a model optimized for CPU. It is less powerful than mixtral, which you can install instead with ollama pull mixtral:8x7b-instruct-v0.1-q4_K_M (26 GB), also optimized for CPU.
Local LLM API: Ollama exposes a local API at http://localhost:11434 that provides access to the model. The Shinkai Node will use it as an AI Provider.
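Before wiring Ollama into Shinkai, you can confirm its API is reachable with curl (a quick sketch; `/api/tags`, which lists the models you have pulled, is part of Ollama's HTTP API, and the port assumes the default 11434):

```shell
# Check whether the local Ollama API is up on the default port.
# /api/tags returns a JSON list of the models you have pulled.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: up"
else
  echo "ollama: not reachable (is 'ollama serve' running?)"
fi
```

If the check fails, start the Ollama service and re-run it before continuing.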
2. Install the Shinkai Node
- macOS
- Linux
- Windows
Open your terminal and run:
# Create a new folder to store your shinkai-node files
mkdir shinkai-node && cd ./shinkai-node
# Download shinkai-node binary
curl -o shinkai-node "https://download.shinkai.com/shinkai-node/binaries/aarch64-apple-darwin/shinkai-node-latest"
# Add exec permissions
chmod +x shinkai-node
# Execute shinkai-node
# If you installed the larger mixtral model instead, replace the
# INITIAL_AGENT_MODELS value below with:
# INITIAL_AGENT_MODELS="ollama:mixtral:8x7b-instruct-v0.1-q4_K_M"
EMBEDDINGS_SERVER_URL="https://public.shinkai.com/x-em" \
UNSTRUCTURED_SERVER_URL="https://public.shinkai.com/x-un" \
FIRST_DEVICE_NEEDS_REGISTRATION_CODE="false" \
INITIAL_AGENT_NAMES="ollama_mistral" \
INITIAL_AGENT_URLS="http://localhost:11434" \
INITIAL_AGENT_MODELS="ollama:mistral:7b-instruct-v0.2-q3_K_M" \
INITIAL_AGENT_API_KEYS="" \
./shinkai-node
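To avoid retyping the environment variables on every launch, you can wrap them in a small start script (a sketch; the file name `start-node.sh` is just a suggestion, and the values mirror the quick-start defaults above):

```shell
#!/bin/sh
# start-node.sh — launch shinkai-node with the quick-start settings.
# Run from the shinkai-node folder: chmod +x start-node.sh && ./start-node.sh
export EMBEDDINGS_SERVER_URL="https://public.shinkai.com/x-em"
export UNSTRUCTURED_SERVER_URL="https://public.shinkai.com/x-un"
export FIRST_DEVICE_NEEDS_REGISTRATION_CODE="false"
export INITIAL_AGENT_NAMES="ollama_mistral"
export INITIAL_AGENT_URLS="http://localhost:11434"
# Swap in ollama:mixtral:8x7b-instruct-v0.1-q4_K_M here if you pulled mixtral.
export INITIAL_AGENT_MODELS="ollama:mistral:7b-instruct-v0.2-q3_K_M"
export INITIAL_AGENT_API_KEYS=""
exec ./shinkai-node
```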
Open your terminal and run:
# Create a new folder to store your shinkai-node files
mkdir shinkai-node
cd ./shinkai-node
# Download shinkai-node binary
curl -o shinkai-node "https://download.shinkai.com/shinkai-node/binaries/x86_64-unknown-linux-gnu/shinkai-node-latest"
# Add exec permissions
chmod +x shinkai-node
# Execute shinkai-node
# If you installed the larger mixtral model instead, replace the
# INITIAL_AGENT_MODELS value below with:
# INITIAL_AGENT_MODELS="ollama:mixtral:8x7b-instruct-v0.1-q4_K_M"
EMBEDDINGS_SERVER_URL="https://public.shinkai.com/x-em" \
UNSTRUCTURED_SERVER_URL="https://public.shinkai.com/x-un" \
FIRST_DEVICE_NEEDS_REGISTRATION_CODE="false" \
INITIAL_AGENT_NAMES="ollama_mistral" \
INITIAL_AGENT_URLS="http://localhost:11434" \
INITIAL_AGENT_MODELS="ollama:mistral:7b-instruct-v0.2-q3_K_M" \
INITIAL_AGENT_API_KEYS="" \
./shinkai-node
Open a Powershell terminal and run:
# Create a new folder to store your shinkai-node files
mkdir shinkai-node
cd ./shinkai-node
# Download shinkai-node binary
curl.exe -o shinkai-node.exe "https://download.shinkai.com/shinkai-node/binaries/x86_64-pc-windows-msvc/shinkai-node-latest.exe"
# Execute shinkai-node
$ENV:EMBEDDINGS_SERVER_URL="https://public.shinkai.com/x-em"
$ENV:UNSTRUCTURED_SERVER_URL="https://public.shinkai.com/x-un"
$ENV:FIRST_DEVICE_NEEDS_REGISTRATION_CODE="false"
$ENV:INITIAL_AGENT_NAMES="ollama_mistral"
$ENV:INITIAL_AGENT_URLS="http://localhost:11434"
$ENV:INITIAL_AGENT_MODELS="ollama:mistral:7b-instruct-v0.2-q3_K_M"
# If you installed the larger mixtral model instead, set this value in place of the line above:
# $ENV:INITIAL_AGENT_MODELS="ollama:mixtral:8x7b-instruct-v0.1-q4_K_M"
$ENV:INITIAL_AGENT_API_KEYS=""
./shinkai-node.exe
If you prefer to run everything locally, see the instructions at the end of this document on how to set up the EMBEDDING ROUTER and the UNSTRUCTURED API on your machine.
3. Install Shinkai Visor
- Download and install Shinkai Visor from the Chrome Web Store.
- Open Shinkai Visor, accept the terms, and press Connect.
- Since you're running the Shinkai node locally, the default localhost IP/port should automatically pick up your node and work instantly.
- From here you can also verify that Ollama is set up correctly by creating your first conversation in Shinkai Visor. Follow the video below for more detailed steps:
Local Installation Instructions
If you've chosen to run the EMBEDDING ROUTER and the UNSTRUCTURED API locally, follow the detailed instructions below.
EMBEDDING ROUTER
Preparation for Linux
First, ensure your system has the necessary dependencies:
sudo apt-get install pkg-config libssl-dev gcc g++ make -y
Common Installation Steps
- Install Rust
Rust is required to build the embedding router. Install it using the following command:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
- Clone the Source Code
Get the latest version of the text embeddings inference project:
git clone --depth=1 https://github.com/huggingface/text-embeddings-inference.git
- Build the Project
Navigate to the project directory and build the router. The build command differs based on your CPU architecture:
- For x86 systems:
cd text-embeddings-inference
cargo install --path router -F candle -F http
- For M1 or M2 (Apple Silicon):
cargo install --path router -F candle -F metal
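If you want to script the build, the architecture switch above can be derived from `uname -m` (a sketch; it only prints the matching `cargo install` command so you can review it before running):

```shell
# Choose the text-embeddings-inference backend feature by CPU architecture:
# Apple Silicon (arm64/aarch64) uses Metal, x86 uses the default HTTP build.
arch="$(uname -m)"
case "$arch" in
  arm64|aarch64) features="-F candle -F metal" ;;
  *)             features="-F candle -F http" ;;
esac
echo "cargo install --path router $features"
```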
- Run the Router
Start the text embeddings router with the following command:
text-embeddings-router --hostname 0.0.0.0 --port 9081 --model-id sentence-transformers/all-MiniLM-L6-v2 --revision refs/pr/21 --dtype float32
- Configure Shinkai Node
Update the Shinkai node configuration to use the local embeddings server:
EMBEDDINGS_SERVER_URL=http://localhost:9081/
For more advanced installation information and options, visit the official repository.
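Once the router is running, you can check that it serves embeddings (a sketch; `/embed` with an `inputs` field is the text-embeddings-inference request format, and port 9081 matches the command above):

```shell
# Ask the local embeddings router to embed a test sentence.
# A successful reply is a JSON array of floating-point vectors.
if curl -fsS http://localhost:9081/embed \
     -H 'Content-Type: application/json' \
     -d '{"inputs": "Hello from Shinkai"}'; then
  echo "embeddings router: ok"
else
  echo "embeddings router: not reachable on port 9081"
fi
```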
UNSTRUCTURED API
Preparation for Linux
Ensure your Linux system has all the required dependencies:
sudo apt-get install pkg-config libssl-dev gcc g++ make -y
sudo apt-get install libsqlite3-dev libffi-dev zlib1g-dev libbz2-dev libgl1 -y
Setup Pyenv
Pyenv is a popular tool for managing multiple Python versions. Set it up by following these steps:
git clone --depth=1 https://github.com/pyenv/pyenv.git ~/.pyenv
git clone --depth=1 https://github.com/pyenv/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.profile
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.profile
echo 'eval "$(pyenv init -)"' >> ~/.profile
For Mac/Homebrew users:
brew install pyenv
brew install pyenv-virtualenv
Common Installation Steps
- Install Python 3.10.12
Use pyenv to install a specific version of Python:
pyenv install 3.10.12
- Clone the Unstructured API Source Code
Obtain the latest version of the Unstructured API project:
git clone --depth=1 https://github.com/Unstructured-IO/unstructured-api.git
- Build the Project
Prepare your environment and install dependencies:
cd unstructured-api
pyenv virtualenv 3.10.12 unstructured-api
pyenv activate unstructured-api
make install-base
- Run the Unstructured API
Start the Unstructured API service:
make run-web-app
- Configure Shinkai Node
Point the Shinkai node to use your local Unstructured API server:
UNSTRUCTURED_SERVER_URL=http://localhost:8000/
For more detailed and advanced installation information, refer to the official Unstructured API repository.
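To verify the service end to end, you can post a small test file to it (a sketch; `/general/v0/general` with a `files` form field is the Unstructured API's partition endpoint, and port 8000 matches the default `make run-web-app` setup):

```shell
# Partition a tiny text file through the local Unstructured API.
# A successful reply is a JSON list of extracted document elements.
echo "Hello from Shinkai" > /tmp/sample.txt
curl -s -X POST http://localhost:8000/general/v0/general \
  -F 'files=@/tmp/sample.txt'
```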