Deploying AnythingLLM Locally from Source on Windows 11

baijin · 2024-11-17 06:59:31

Hi everyone. Today we will walk through deploying the AnythingLLM large-model application locally on Windows, building from source.

1. Install Node.js

https://nodejs.org/en/download/prebuilt-installer

2. Install Yarn

npm install -g yarn

3. Pull the source with Git

git clone https://github.com/Mintplex-Labs/anything-llm.git

Next come the setup steps. They are excerpted from material found online and verified to work: follow them in order and the app will run. One important caveat: run these commands in Git Bash, not in the Windows cmd or PowerShell, because they rely on POSIX tools such as cp and on environment-variable prefixes such as NODE_ENV=production.

Getting started

  1. Clone the repo into your server as the user who the application will run as. git clone git@github.com:Mintplex-Labs/anything-llm.git
  2. cd anything-llm and run yarn setup. This will install all dependencies to run in production as well as debug the application.
  3. cp server/.env.example server/.env to create the basic ENV file for where instance settings will be read from on service start.
  4. Ensure that the server/.env file has at least these keys to start. These values will persist and this file will be automatically written and managed after your first successful boot.
STORAGE_DIR="/your/absolute/path/to/server/storage"
  5. Edit the frontend/.env file so that VITE_API_BASE is set to /api. The .env file documents which value to use for each deployment mode.
# VITE_API_BASE='http://localhost:3001/api' # Use this URL when developing locally
# VITE_API_BASE="https://$CODESPACE_NAME-3001.$GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN/api" # for Github Codespaces
VITE_API_BASE='/api' # Use this URL deploying on non-localhost address OR in docker.
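Steps 3 through 5 above amount to writing two small config files. A minimal sketch of that, run against a throwaway directory so it can be tried safely (the directory layout here is a stand-in for your actual repo checkout):

```shell
# Sketch only: DEMO stands in for the real anything-llm checkout path.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/server/storage" "$DEMO/frontend"

# Steps 3-4: create server/.env with an absolute STORAGE_DIR.
printf 'STORAGE_DIR="%s"\n' "$DEMO/server/storage" > "$DEMO/server/.env"

# Step 5: point the frontend at the relative /api base.
printf "VITE_API_BASE='/api'\n" > "$DEMO/frontend/.env"

# Show what was written.
grep STORAGE_DIR "$DEMO/server/.env"
grep VITE_API_BASE "$DEMO/frontend/.env"
```

For the real deployment, replace the DEMO paths with your clone's location; the important part is that STORAGE_DIR must be an absolute path.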

To start the application

AnythingLLM consists of three main sections: the frontend, server, and collector. When running in production you will run the server and collector as two different processes, with a build step to compile the frontend.

  1. Build the frontend application. cd frontend && yarn build - this will produce a frontend/dist folder that will be used later.
  2. Copy frontend/dist to server/public - cp -R frontend/dist server/public. This should create a folder in server named public which contains a top level index.html file and various other files/folders.
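The cp -R in step 2 can be tried with placeholder files to see the resulting layout (a sketch; the dist contents below are stand-ins for the real build output):

```shell
# Simulate the frontend build output and the copy into server/public.
WORK=$(mktemp -d)
mkdir -p "$WORK/frontend/dist/assets" "$WORK/server"
echo '<html></html>' > "$WORK/frontend/dist/index.html"

# When server/public does not yet exist, `cp -R dist public` creates
# public as a copy of dist, so index.html lands at server/public/index.html.
cp -R "$WORK/frontend/dist" "$WORK/server/public"

ls "$WORK/server/public"
```

One caveat worth knowing: if server/public already exists from a previous build, cp -R copies dist *inside* it (server/public/dist), which breaks the expected layout; delete the old public folder before re-copying.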

(optional) Build native LLM support if using native as your LLM. cd server && npx --no node-llama-cpp download

  3. Migrate and prepare your database file.
cd server && npx prisma generate --schema=./prisma/schema.prisma
cd server && npx prisma migrate deploy --schema=./prisma/schema.prisma
  4. Boot the server in production: cd server && NODE_ENV=production node index.js &
  5. Boot the collector in another process: cd collector && NODE_ENV=production node index.js &

AnythingLLM should now be running on http://localhost:3001!
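The trailing & on the two boot commands runs the server and collector as separate background processes. A minimal sketch of that shell mechanic, with sleep standing in for node index.js (this only demonstrates the process handling, not the app itself):

```shell
# Sketch: each subshell cd's into its own directory and runs with
# NODE_ENV=production; the trailing `&` backgrounds it.
RUN=$(mktemp -d)
mkdir -p "$RUN/server" "$RUN/collector"

( cd "$RUN/server" && NODE_ENV=production sleep 1 ) &    # stand-in for node index.js
SERVER_PID=$!
( cd "$RUN/collector" && NODE_ENV=production sleep 1 ) & # stand-in for node index.js
COLLECTOR_PID=$!

# Wait for both background processes and record the result.
wait "$SERVER_PID" "$COLLECTOR_PID" && echo "both processes exited cleanly" > "$RUN/status"
cat "$RUN/status"
```

For a long-running deployment you would normally hand these two processes to a service manager rather than relying on bare & jobs tied to the shell session.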

That's it. Next, open the web UI and complete the configuration there.

Troubleshooting

Fixing yarn install hanging at "Building fresh packages…"

1. Open C:\Users\<username>\.yarnrc
and add:
registry "https://registry.npmmirror.com"
sass_binary_site "https://npmmirror.com/mirrors/node-sass/"
phantomjs_cdnurl "http://cnpmjs.org/downloads"
electron_mirror "https://npmmirror.com/mirrors/electron/"
sqlite3_binary_host_mirror "https://foxgis.oss-cn-shanghai.aliyuncs.com/"
profiler_binary_host_mirror "https://npmmirror.com/mirrors/node-inspector/"
chromedriver_cdnurl "https://npmmirror.com/mirrors/chromedriver/"
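The mirror entries above can also be written from the shell. A sketch using a throwaway directory as a sandbox .yarnrc location, so your real config is untouched (only a subset of the entries is shown):

```shell
# Write npm-mirror settings into a sandboxed .yarnrc.
SANDBOX=$(mktemp -d)
YARNRC="$SANDBOX/.yarnrc"
cat > "$YARNRC" <<'EOF'
registry "https://registry.npmmirror.com"
sass_binary_site "https://npmmirror.com/mirrors/node-sass/"
electron_mirror "https://npmmirror.com/mirrors/electron/"
EOF

# Yarn reads ~/.yarnrc on the next `yarn install`, so binary
# downloads for native packages resolve from the mirrors instead.
grep registry "$YARNRC"
```

The mirrors only affect where packages and prebuilt binaries are fetched from; they do not change which versions are installed.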

Thanks for your support, everyone. Likes and follows are appreciated. See you next time.
