


# Running llama.cpp on LicheePi 4A

Llama is Meta's open-source large language model, and llama.cpp is ggerganov's open-source pure-C++ llama inference project. Thanks to llama.cpp, an excellent project, we can run an LLM on the LicheePi 4A.

Zepan slightly modified llama.cpp earlier to allow it to run the 7B model with less memory (down to about 700 MB).

![llama.cpp running on TH1520](zh/lichee/th1520/lpi4a/assets/application/llama_th1520.png)

You can see that the TH1520 takes about 6 s to compute a token (without V-extension acceleration, which is expected to speed it up by a factor of 4-8, so feel free to pitch in if you've added V-extension support!).

The feasibility of running the 7B model on an entry-level C906 core was also briefly tested. Because of the small amount of memory on the D1 and the read-only mmap mapping of the model, a large number of low-speed IO operations were introduced, which slowed the run down to only 18 s/token.
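For orientation, a typical llama.cpp invocation of that era looked roughly like the following; the model path, prompt, and flag values here are placeholders, not the exact command used on the TH1520:

```sh
# Hypothetical paths/values: -m selects the GGML model file, -p the prompt,
# -n the number of tokens to generate.
./main -m models/7B/ggml-model-q4_0.bin -p "Hello from RISC-V" -n 64
```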
# Deploying YOLOX on LicheePi 4A

This tutorial is an example of how to deploy the YOLOX model to accomplish object detection on the LPi4A (LicheePi 4A) development board platform. It covers installing the Python environment on the LPi4A development board and executing the model using the source code from the YOLOX project.

The tutorial follows the usual model deployment process:

1. Basic Hardware and Software Configuration
2. Basic Python environment configuration on LPi4A
3. Obtaining the YOLOX source code and models
4. Installing python packages that YOLOX depends on
5. Example execution using HHB-onnxruntime on LPi4A
## Basic Hardware and Software Configuration

Refer to the description in LPi4A's "Out-of-the-box experience", install the development board correctly, and enter with root privileges after powering on and booting up.

Ensure that you are connected to the Internet, and update the apt sources.
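Updating the package index uses the standard apt command (run as root, matching the rest of this tutorial):

```sh
apt update
```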
Install some software for subsequent use in the example:

```sh
sudo apt install wget git vim
```

## Basic Python environment configuration on LPi4A

Python version 3.11 is installed by default on the system image flashed to the LPi4A. You can confirm this with the following command:

```sh
python --version
```

We will use python 3.11 as an example; for other versions, you will need to change the commands to the corresponding version when installing dependencies.

Before installing other python packages, install the venv package, which is used to create a python virtual environment:

```sh
apt install python3.11-venv
```

Then create a python virtual environment and activate it, starting from `cd /root`, as sketched below.
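The tutorial's exact venv commands after `cd /root` are not shown in this copy, so the environment name below is hypothetical; any name works as long as you activate the environment before installing packages:

```sh
cd /root
python3.11 -m venv yolox_env    # "yolox_env" is a hypothetical name
source yolox_env/bin/activate
```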
At this point, the basic python environment has been created.

Most of the packages that the various python programs depend on can be installed via pip, which itself can be installed with the following command:

```sh
apt install python3-pip
```

Similar to other architectures, pure python packages can be installed directly via `pip install`.
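For instance, loguru (used later in this tutorial) ships a pure-python `py3-none-any` wheel, so it should install the same way as on any other architecture:

```sh
pip3 install loguru
```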
## Obtaining the YOLOX source code and models

YOLOX is a YOLO-like object detection model with quite excellent performance. The source code and the model can be downloaded directly from GitHub with `git clone`.
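The clone URL is not preserved in this copy of the tutorial; YOLOX's upstream repository is Megvii-BaseDetection/YOLOX on GitHub, so the command is presumably:

```sh
git clone https://github.com/Megvii-BaseDetection/YOLOX.git
```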
## Installing python packages that YOLOX depends on

The YOLOX example in this tutorial relies on a large number of python packages. The python ecosystem for the RISC-V architecture is still lacking; in the future, the packages that YOLOX depends on will be installable directly from the requirements.txt file, but for now they have to be handled manually, in the following order.

First, download the pre-compiled python packages (the branch matches the python version):

```sh
git clone -b python3.11
```

The opencv installation will depend on other python packages, so if pip does not download them automatically, you can install the dependencies manually first. See "download riscv whl" for more information on how to get the packages.

Then install the wheels in order, for example:

```sh
pip3 install loguru-0.7.0-py3-none-any.whl
```
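For comparison, once RISC-V wheels are generally available, the whole dependency set could be installed in one step from the YOLOX source directory with standard pip usage instead of the manual order above:

```sh
pip3 install -r requirements.txt
```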
## Example execution using HHB-onnxruntime on LPi4A

This tutorial will use HHB-onnxruntime to execute the model. Copy its C920 libraries into the system library directory:

```sh
cp c920/lib/* /usr/lib/riscv64-linux-gnu/ -rf
```

In the onnxruntime example directory in the source code, modify the beginning of the file demo/ONNXRuntime/onnx_inference.py to add two new lines of code, the first being `#!/usr/bin/env python3`. The added code specifies the module search path, thus eliminating the need to install YOLOX from its source code; a sketch of the change follows.
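The second added line is not preserved in this copy; a plausible reconstruction (an assumption, including the relative path) extends `sys.path` so that the `yolox` package is found in the checkout itself:

```python
#!/usr/bin/env python3
import sys; sys.path.insert(0, "../..")  # hypothetical: repo root, relative to demo/ONNXRuntime
```

With the search path set this way, the demo script imports `yolox` straight from the cloned repository, so installing the YOLOX package itself is unnecessary.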
