R Script Executor with LLM-Powered Insights
Introduction
In this tutorial you will build a tool to quickly run R scripts from Shinkai, and use it to easily interact with your scripts' text outputs via an LLM.
You will learn how to:
- use an executable path to run other software from Shinkai
- use LLMs to interact with your R scripts' text outputs
- build the tool using the Shinkai AI assistance
- code the tool
- integrate error logging at each step
Prerequisites
Before starting this tutorial:
- Install and open Shinkai Desktop
- Install R
- Create an R project with at least 1 script
- Note these 3 paths: the Rscript executable, the R script to run, and the R project root directory
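For example (hypothetical Windows paths, adjust to your own installation and project):

```
Rscript executable:       C:/Program Files/R/R-4.4.1/bin/Rscript.exe
R script to run:          C:/Users/me/my_r_project/scripts/analysis.R
R project root directory: C:/Users/me/my_r_project
```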
This tutorial is split into 3 parts, and you can skip to the one that interests you without missing essential context.
Note: the project root directory input ensures the R script runs with its original working directory context, which keeps file access reliable, especially if the script uses relative file paths (e.g. data/file.csv). Without it, the script would still execute, but it might fail to find or interact with project files. Well-organised R projects typically rely on a project root directory and relative file paths to keep things organised and functional.
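As a quick illustration (hypothetical paths), switching into the project root before execution is what makes such relative paths resolve:

```python
import os

# Hypothetical paths, for illustration only.
os.chdir("C:/Users/me/my_r_project")      # switch to the project root
print(os.path.abspath("data/file.csv"))   # prints the absolute path under the project root, e.g. .../my_r_project/data/file.csv
```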
Part 1: Building an R script executor tool using Shinkai's AI-assisted tool creation UI
Shinkai offers an effortless tool-building experience thanks to its AI-assisted tool creation UI, where even library dependencies and tool metadata are handled automatically.
In the tool creation UI:
- select a performant LLM (e.g. gpt_4o, shinkai_free_trial)
- select a programming language (we’ll use Python in this tutorial)
- write a prompt describing the tool well and execute it
For a good result, your prompt should be detailed and clearly describe:
- the task the tool should accomplish and how
- what you would want in configuration versus inputs
- how to handle errors
Below is an example of a prompt to generate a full prototype of the R script executor tool. It uses tags to make things clear for the LLM.
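An illustrative prompt along these lines (the tag names are just a convention to structure the request, not a requirement):

```
<task>
Create a tool that runs an R script with the Rscript executable and returns its full console output as text.
</task>

<configuration>
- rscript_executable_path: full path to the Rscript executable
</configuration>

<inputs>
- r_script_path: full path to the R script to run
- project_root_directory: root directory of the R project; switch to this directory before running the script so relative paths resolve
</inputs>

<behavior>
- Check that the executable, the script, and the project directory exist before running.
- Run the script with subprocess, capturing stdout and stderr.
- Always return to the original working directory afterwards.
</behavior>

<error_handling>
- At each step, return a clear error message describing what failed.
- Return success/failure, the captured output, and the captured errors.
</error_handling>
```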
Such a prompt can create the tool successfully. At the very least it should produce a good code flow for the intended tool, which you can then debug, edit, and improve (both with prompts and with manual code editing). If you get error messages, you can copy-paste them to the AI assistance, optionally with added instructions, and it should be able to fix the tool. Once the tool is working, make sure to edit the metadata to make it as informative and useful as possible.
Below is the detailed code and metadata of an R script executor tool.
Part 2: Full code for an R script executor tool
The tool does the following:
- sets up configuration, input, and output classes
- checks if the R executable, script, and project directory exist
- switches to the project directory
- prepares and runs the R script using subprocess
- captures outputs and errors from the script
- returns to the original directory
- returns the results (success/failure, output, errors)
Here is annotated code for the tool, including explanations for all steps.
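As a minimal sketch, assuming the CONFIG / INPUTS / OUTPUT class convention Shinkai uses for Python tools (field and function names are illustrative; the AI-assisted UI may generate slightly different ones), the flow above could look like this:

```python
from dataclasses import dataclass
import os
import subprocess


@dataclass
class CONFIG:
    rscript_executable_path: str  # e.g. "C:/Program Files/R/R-4.4.1/bin/Rscript.exe" (hypothetical)


@dataclass
class INPUTS:
    r_script_path: str            # path to the .R script to run
    project_root_directory: str   # root of the R project (so relative paths resolve)


@dataclass
class OUTPUT:
    success: bool
    output: str
    error: str


async def run(config: CONFIG, inputs: INPUTS) -> OUTPUT:
    # 1. Check that the executable, the script, and the project directory exist.
    if not os.path.isfile(config.rscript_executable_path):
        return OUTPUT(False, "", f"Rscript executable not found: {config.rscript_executable_path}")
    if not os.path.isfile(inputs.r_script_path):
        return OUTPUT(False, "", f"R script not found: {inputs.r_script_path}")
    if not os.path.isdir(inputs.project_root_directory):
        return OUTPUT(False, "", f"Project directory not found: {inputs.project_root_directory}")

    original_directory = os.getcwd()
    try:
        # 2. Switch to the project root so relative paths inside the script resolve.
        os.chdir(inputs.project_root_directory)

        # 3. Run the script with Rscript; --no-save and --vanilla keep the run clean
        #    (no workspace loaded or saved, no profiles).
        command = [
            config.rscript_executable_path,
            "--no-save",
            "--vanilla",
            inputs.r_script_path,
        ]
        result = subprocess.run(command, capture_output=True, text=True)

        # 4. Capture stdout (the script's text output) and stderr (errors/warnings).
        return OUTPUT(
            success=(result.returncode == 0),
            output=result.stdout,
            error=result.stderr,
        )
    except Exception as exc:
        return OUTPUT(False, "", f"Execution failed: {exc}")
    finally:
        # 5. Always return to the original working directory.
        os.chdir(original_directory)
```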
And here is good metadata for the tool. Make sure to:
- Give concrete examples for inputs and configurations. Be super explicit (e.g. give an example of an R script file path). This helps users, and yourself, know or recall the precise formats (or values, options, etc.) required or possible.
- Pick useful keywords.
- Write a clear but thorough tool description.
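As an illustration, metadata along these lines (the field names follow the usual Shinkai tool metadata structure, but check what your version generates; the example paths are hypothetical):

```json
{
  "name": "R Script Executor",
  "description": "Runs an R script with a given Rscript executable from the project root directory, and returns the script's console output and any errors as text.",
  "keywords": ["R", "Rscript", "script execution", "statistics", "data analysis"],
  "configurations": {
    "type": "object",
    "properties": {
      "rscript_executable_path": {
        "type": "string",
        "description": "Full path to the Rscript executable, e.g. C:/Program Files/R/R-4.4.1/bin/Rscript.exe"
      }
    },
    "required": ["rscript_executable_path"]
  },
  "parameters": {
    "type": "object",
    "properties": {
      "r_script_path": {
        "type": "string",
        "description": "Full path to the R script to run, e.g. C:/Users/me/my_r_project/scripts/analysis.R"
      },
      "project_root_directory": {
        "type": "string",
        "description": "Root directory of the R project, e.g. C:/Users/me/my_r_project"
      }
    },
    "required": ["r_script_path", "project_root_directory"]
  },
  "result": {
    "type": "object",
    "properties": {
      "success": { "type": "boolean" },
      "output": { "type": "string" },
      "error": { "type": "string" }
    }
  }
}
```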
Part 3: Using the tool, interacting with R scripts' text outputs from Shinkai
First, enter your Rscript executable path in the tool configuration.
In Shinkai chat you can use the tool:
- select an adequate LLM: either a performant LLM able to handle complex and potentially long context (especially if executing advanced R scripts), or a tailored LLM that performs especially well on the type of outputs your R script creates
- type ’/’ to access the list of available tools
- select the R Script Executor tool
- add your 2 input paths: the R script and the R project root directory
- add a prompt to interact with the text outputs of the R script
- press ‘enter’ or click on the arrow to send
When you execute the tool, the full text output (i.e. prints, or console logs) of your R script execution is passed as context to the LLM answering you. This R output contains, line by line, the executed script, the resulting outputs, and any errors.
So you can ask questions about:
- The R script itself
- The text outputs of its execution (results and any errors)
Here are some prompt examples:
- “I am trying to learn how this R script works. Explain its overall process, and then each step in more detail.”
- “Based on this R script and its execution results, what steps could I add to get more detailed error logs?”
- “Remind me which parameters are used for ‘given task/step/function/etc.’ in this R script. And based on the results, suggest better parameters.”
- “Among all the models generated by this R script, which one is the most accurate? Show me its parameters and accuracy results.”
- Any question about the data the script outputs in the console.
Below are some response examples from the R Script Executor tool.
Answering which linear model is the best:
Summarizing:
Showing an extract of the data:
Error handling:
Next Steps
Consider adding the functionality to display and open files generated by R scripts (plots, images, maps, data files, models, etc.).
Alternatively, integrate the tool as-is with an AI agent that can access additional tools for tasks like reading and analyzing various file types within an R project directory.
In the R execution command, you can modify the --no-save and --vanilla parameters to change the behavior of the script execution (e.g. loading/saving the workspace), or make them accessible from the configuration or inputs, as sketched below.
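For instance, a small helper (hypothetical, not part of the tool as generated) could make the flags configurable:

```python
def build_r_command(rscript_path: str, script_path: str, extra_flags: list[str] | None = None) -> list[str]:
    # --no-save: don't save the workspace on exit; --vanilla: also ignore
    # site/user profiles and any saved workspace. Override them by passing
    # different flags, e.g. from a configuration field.
    flags = extra_flags if extra_flags is not None else ["--no-save", "--vanilla"]
    return [rscript_path, *flags, script_path]


# Default clean run:
build_r_command("C:/Program Files/R/R-4.4.1/bin/Rscript.exe", "scripts/analysis.R")
# Let the project's .Rprofile load before the script runs (drop --vanilla):
build_r_command("C:/Program Files/R/R-4.4.1/bin/Rscript.exe", "scripts/analysis.R", ["--no-save"])
```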