Introduction

In this tutorial, you will learn how to create a blockchain-data-aware AI Agent in Shinkai. Such an AI Agent can discuss up-to-date blockchain data, including topics like token and NFT data, DEX activity and liquidity pool details, address portfolios, stablecoin pegs, etc.

You will learn about:

  • choosing data sources
  • defining tools
  • building useful tools
  • setting up the AI Agent
  • using the AI Agent

Our concrete example is a Cardano blockchain AI Agent that accesses data through 6 related tools built on the Taptools API, all already available in the Shinkai AI Store. We will refer to the code of these tools (search for ‘Taptools’ in the Shinkai AI Store). You can see the capabilities of this Cardano AI Agent in this video.

This tutorial is a detailed overview of the design and implementation process of such a blockchain-data-aware AI Agent, but not a step-by-step guide to coding all of its tools.

Prerequisites

To best follow this tutorial, you will need:

  • the latest version of the Shinkai desktop App
  • the Taptools tools installed from the Shinkai AI Store
  • to duplicate these tools and open them in the tool playground to see their code, metadata and raw outputs
  • a Taptools API key if you want to test the example AI Agent and tools
  • additional API keys if testing all the features (CoinMarketCap, ElevenLabs)

Part 1: Choosing data sources

A blockchain AI Agent needs external source(s) of real-time on-chain data, as such up-to-date information is obviously not part of the training data of the underlying LLM. The first step in designing the AI Agent is to choose appropriate sources of data to enable its desired capabilities.

Some possible sources of live on-chain data are:

  • local indexers with data-fetching capabilities:

The blockchain AI Agent would get data by directly querying your local indexer, most likely linked to your local node. For Cardano, such indexers include kupo, koios, utxorpc, and cardano-graphql, among others. The Agent's tools would then format the data as needed. Such indexers can also be hosted by service providers.

  • blockchain data provider services:

The blockchain AI Agent would get data by calling API endpoints from specialised data providers. Such providers typically run their own nodes and implement their own data formatting on top of raw on-chain data. For Cardano, such platforms include Maestro, Blockfrost, and Taptools.

  • decentralised blockchain data protocols:

The blockchain AI Agent would get data by accessing data that is structured and indexed through a decentralised blockchain data protocol, such as The Graph Protocol.

  • crypto-wide analytics and market platforms:

The blockchain AI Agent would get data by querying formatted data from platforms specialised in blockchain analytics about protocols, valuations, etc. Such platforms include DeFiLlama, CoinGecko, etc.

To guide your choice of data sources and of precise data points, think about the content and format of the information you want your blockchain AI Agent to be able to output.

Our example here uses the Taptools API. Taptools is the dominant platform for data about Cardano token and NFT trading, as well as wallet tracking. Concretely, our Cardano AI Agent will use tools that call Taptools API endpoints, then arrange the data to create useful outputs interpretable by the AI Agent.

Part 2: Defining tools

2.1 Choosing tools

To create an advanced blockchain data AI Agent with varied capabilities, you will need multiple tools. Based on the user's prompt, the AI Agent will use the most appropriate of its tools to answer and execute the required task(s). This leads to some requirements.

Tools should be:

  • standalone:

Each tool should cover a unique but complete set of capabilities and/or data. This ensures that when the AI Agent uses a given tool, it has everything it needs to perform the corresponding task completely.

  • distinct and easy to trigger:

It should be easy for the AI Agent to know with high accuracy which tool to use for a given task.

Following these principles, below are the 6 tools our Cardano blockchain AI Agent uses (available in the Shinkai AI Store). They allow it to discuss the NFT market, token trading activity, and the peg stability of stablecoins, synthetics, and bridged assets, and also to plot trading charts, analyze a given token, and detail the portfolio of any address:

  • Cardano Native Tokens Market Stats Aggregator:

This tool retrieves and aggregates market data for native tokens. It fetches data on the top tokens by liquidity, market capitalization (either market cap or fully diluted value), and volume. The tool can display prices, liquidity, market cap, and volume in ADA as well as in a configurable quote currency. It also retrieves token descriptions and links (website, Twitter, etc.). For volume, it offers a configurable timeframe. Additionally, it retrieves and displays 24-hour market statistics, like DEX trading volume (in ADA and the selected quote currency) and active addresses. The tool offers configurable inputs for the number of tokens to retrieve in each category, the timeframe, and the quote currency.

  • Cardano NFT Market Stats Aggregator:

This tool retrieves and presents Cardano NFT market data. It retrieves overall market statistics (volume, buyers, sellers, and percentage changes), identifies today's top NFT collections based on customizable ranking criteria (and fetches additional collection details like social links), calculates total trading volume over specified timeframes, and presents lists of top collections by volume. It delivers personalized and user-friendly summaries of the Cardano NFT market landscape.

  • Cardano Native Token Trading Chart Generator:

This tool generates a candlestick chart of token price (in ADA) and trading volume for the chosen token ticker. You can choose the chart interval and the number of intervals in your prompt.

  • Cardano Depeg Scanner:

This tool aggregates data for Cardano stablecoins, synthetics, and bridged assets, and performs a depeg scan. It outputs a .csv file with the aggregated data. You can define your own alert threshold for each asset in the configuration. You can easily add or remove tokens to track from the code. You can choose a timeframe for trading stats in your prompt. This tool can answer any question related to the data it formats in the generated .csv file (a minimal sketch of the core depeg check appears after this list).

  • Cardano Portfolio Tracker (tutorial):

This tool presents and plots cryptocurrency portfolio data for a specified Cardano wallet address. It retrieves portfolio positions, including fungible tokens, NFTs, and liquidity provider positions, and calculates their values in both ADA and a user-defined quote currency. The tool can generate bar plots for portfolio positions and trend graphs for portfolio value over customizable timeframes, with options to filter out low-value assets. It outputs detailed portfolio data and visualizations saved as PNG files, making it a versatile tool for cryptocurrency portfolio monitoring and analysis.

  • Cardano Native Token Analyzer:

This tool fetches Cardano Native Token information, including price, market cap, trading statistics, liquidity pool data, and holder distribution. It can also optionally generate a trading chart, a Lorenz curve graph, a full report exportable to PDF, and a podcast script as well as its audio file as an MP3.
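As promised above, here is a minimal sketch of the Depeg Scanner's core check. The tickers, peg targets, and threshold below are illustrative only; the published tool reads its tracked assets and thresholds from its configuration:

# Hypothetical sketch of a depeg check; tickers and threshold are examples only
PEG_TARGETS = {"DJED": 1.0, "iUSD": 1.0}  # asset ticker -> peg value in USD
ALERT_THRESHOLD = 0.02                     # 2% deviation triggers an alert

def depeg_status(ticker: str, price_usd: float) -> str:
    """Return a human-readable peg status for one asset."""
    peg = PEG_TARGETS[ticker]
    deviation = (price_usd - peg) / peg
    if abs(deviation) > ALERT_THRESHOLD:
        return f"{ticker} DEPEG ALERT: {deviation:+.2%} from peg"
    return f"{ticker} on peg ({deviation:+.2%} deviation)"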

2.2 Choosing precise data points for each tool

Once you have a list of tools to give your blockchain AI Agent interesting capabilities, you can map exactly what each tool will use for data acquisition.

For example, for tools relying on API calls, you can create a spreadsheet for each tool listing all useful endpoints along with the outputs they give and the use cases each will allow. This will help you be exhaustive and make full use of the accessible data, to give rich features to your AI Agent.

As an example, here are the endpoints of the Taptools API used for our Cardano Native Token Analyzer tool:

  • /token/holders: get the total number of holders for a specific token
  • /token/links: get a specific token's social links, if they have been provided to TapTools
  • /token/mcap: get a specific token's supply and market cap information
  • /token/pools: get a specific token's active liquidity pools
  • /token/prices/chg: get a specific token's price percent change over various timeframes
  • /token/trading/stats: get aggregated trading stats for a particular token
  • /token/ohlcv: get a specific token's trended (open, high, low, close, volume) price data
  • /token/top/mcap: get tokens with top market cap in descending order
  • /integration/asset: get details of a given token by its address
  • /token/quote: get the price of ADA in a quote currency
  • /token/holders/top: get the top holders of a token

This variety of data allows the Cardano AI Agent to trigger only one tool when asked about a particular token while being able to answer various questions and provide extensive analysis.
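In code, such a mapping can live in a simple dict that the rest of the tool references. Below is a sketch; the key names are illustrative, not necessarily those of the published tool:

api_base_url = "https://openapi.taptools.io/api/v1"

# central mapping of the endpoints the tool uses
endpoints = {
    "holders": f"{api_base_url}/token/holders",
    "links": f"{api_base_url}/token/links",
    "mcap": f"{api_base_url}/token/mcap",
    "pools": f"{api_base_url}/token/pools",
    "price_changes": f"{api_base_url}/token/prices/chg",
    "trading_stats": f"{api_base_url}/token/trading/stats",
    "ohlcv": f"{api_base_url}/token/ohlcv",
    "top_mcap": f"{api_base_url}/token/top/mcap",
    "asset_details": f"{api_base_url}/integration/asset",
    "quote": f"{api_base_url}/token/quote",
    "top_holders": f"{api_base_url}/token/holders/top",
}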

Once you have defined your blockchain AI Agent’s tools, you can build them carefully following the guidance below to make them optimally useful.

Part 3: Building useful tools

AI Agent tools have keywords to trigger them, configurations to set some features, inputs/parameters to accomplish specific tasks, and all kinds of possible outputs.

To make them maximally useful, tools must:

  • have easy-to-use inputs
  • have explicit outputs
  • explore useful multimodal outputs
  • be efficient: make some features optional
  • have guided outputs interpretation
  • have granular error handling

For the tools of a blockchain AI Agent, this can translate to: making sure sorting parameters and timeframes are easy to use when asking about NFT or token trading activity; making it clear to the LLM what each data point (holdings, volumes, sales, etc.) is about; making deeper, more compute-heavy analyses optional (like holder distribution analysis); allowing for graphs of trading activity and portfolio positions, or even creating podcasts about tokens; adding warnings and considerations about the specifics of some data points, like supplies and top holders which might include special wallets; and more.
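To make this anatomy concrete, here is a minimal skeleton of a Shinkai Python tool, following the CONFIG / INPUTS / OUTPUT pattern used by the tools in this tutorial (a simplified sketch, not one of the published tools; the TAPTOOLS_API_KEY name is illustrative):

from typing import Optional, Dict, Any

class CONFIG:
    # settings defined once by the user, e.g. API keys and options
    TAPTOOLS_API_KEY: str
    quote_currency: str = "USD"

class INPUTS:
    # parameters the LLM fills in from the user's prompt
    ticker: str
    timeframe: Optional[str] = "24h"

class OUTPUT:
    # everything set here is passed as context to the LLM of the AI Agent
    token_data: Dict[str, Any]
    error_message: Optional[str]

async def run(config: CONFIG, inputs: INPUTS) -> OUTPUT:
    output = OUTPUT()
    # fetch data, format it explicitly, handle errors granularly
    return output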

Below, you will explore these different aspects through code examples from our Taptools Cardano tools. You can also read the step-by-step tutorial explaining the complete code of one of these tools (portfolio tracker tutorial).

Additionally, Shinkai makes building tools and AI Agents effortless thanks to its integrated AI assistance. You can build tools just by prompting an LLM. With Shinkai, you don't have to wrestle with library dependencies or handle manual deployments; everything is automated and user-friendly. In this video, we build a working prototype of the Cardano portfolio tracker tool's data retrieval feature in just a few minutes, simply by giving some instructions and passing some documentation snippets about the API used.

To make better use of this AI-assisted tool creation and be able to improve the result manually, below you will learn in detail about the tool design considerations mentioned above, and see concrete code implementations.

3.1 Easy-to-use inputs

Inputs should be well defined and distinct. For example, users are likely to ask a blockchain AI Agent questions over different timeframes. But a tool with too many timeframe inputs might confuse the AI Agent's LLM, especially if the timeframe names are too similar.

In our NFT market stats tool, we use 3 different timeframe inputs for different data, and we make sure their names are explicit and distinct:

class INPUTS:
    timeframe_for_NFT_market_stats: Optional[str] = "30d"
    timeframe_for_top_NFTs_by_volume: Optional[str] = "24h"
    timeframe_for_NFT_volume_trended: Optional[str] = "30d"

To make it as user-friendly as possible, it's better to set default timeframe values to the most interesting or most often used timeframes, so that users don't need to include any timeframe information in their prompts if they are happy with the default value. Alternatively, the default value of any input could itself be exposed as a configuration, so that users can define their own.
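For example, a config-defined default can serve as a fallback when the prompt provides no timeframe. Below is a minimal sketch with hypothetical names (default_timeframe is not a setting of the published tools):

from typing import Optional

class CONFIG:
    # hypothetical user-defined default timeframe
    default_timeframe: str = "30d"

class INPUTS:
    timeframe_for_top_NFTs_by_volume: Optional[str] = None

async def run(config: CONFIG, inputs: INPUTS) -> OUTPUT:
    # use the timeframe from the prompt if given, else the configured default
    timeframe = inputs.timeframe_for_top_NFTs_by_volume or config.default_timeframe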

All inputs should have explicit names, not only so the LLM can match elements of the user's prompt with the corresponding inputs, but also so users can easily know what parameters the tool has, and thus what to include in their prompts.

In our TapTools tools, the API endpoints have parameters with generic names, so we create inputs with more explicit names and pass their values as parameters in the API calls.

Below is an example for the ranking method of today's top NFTs. The endpoint parameter is simply ‘ranking’, but we create a more explicit input:

class INPUTS:
    Ranking_method_for_today_top_NFTs: Optional[str] = "volume"

We then use this value in the corresponding endpoint call:

async def run(config: CONFIG, inputs: INPUTS) -> OUTPUT:
    api_base_url = "https://openapi.taptools.io/api/v1"
    nft_market_stats_url = f"{api_base_url}/nft/market/stats/extended?timeframe={inputs.timeframe_for_NFT_market_stats}"
    top_nfts_url = f"{api_base_url}/nft/top/timeframe?ranking={inputs.Ranking_method_for_today_top_NFTs}&items={inputs.Number_of_today_top_NFTs_to_retrieve}"

3.2 Explicit outputs

AI Agent tools work by having their outputs passed as context to the LLM behind the AI Agent: the output is content added to the user's prompt. For a blockchain AI Agent, we need to make sure all data coming from the tools is clearly understandable and unambiguous. This means naming all elements of the outputs explicitly, adding units (in some cases dynamically), making percentages obvious, etc. Below are some examples.

For example, in the portfolio tracker tool, we create an output ‘portfolio_positions’ as a dictionary, and then edit its components:

class OUTPUT:
    portfolio_positions: Dict[str, Any]

We rename all the keys before including them in that output:

    # Fetch portfolio positions
    response = get(portfolio_positions_url, headers=headers)
    
    output = OUTPUT()
    
    if response.status_code == 200:
        data = response.json()
        
        # Rename keys
        data["Number_of_Fungible_Tokens"] = data.pop("numFTs", 0)
        data["Number_of_NFT_collections"] = data.pop("numNFTs", 0)
        data["Fungible_Tokens_positions"] = data.pop("positionsFt", [])
        data["NFTs_positions"] = data.pop("positionsNft", [])
        data["Liquidity_Provider_Positions"] = data.pop("positionsLp", [])

We also improve the formatting of data that comes as percentages. The API endpoints deliver percentages as numbers with many decimals. To make them clearer, we multiply by 100, round to 2 decimals, add a ‘%’ symbol, and add a ‘+’ sign for positive percentages.

        # Modify percentages and calculate quote currency values
        for position in data["Fungible_Tokens_positions"]:
            for timeframe in ["24h", "7d", "30d"]:
                if timeframe in position:
                    change_value = position[timeframe]
                    sign = "+" if change_value >= 0 else ""
                    position[timeframe] = f"{sign}{round(change_value * 100, 2)} %"

Rounding token prices also requires some care for meaningful presentation, as prices can range from the thousands down to incredibly small decimals below 1. Below, we format to 2 decimals if the price is 1 or more, or otherwise to at most 10 decimals, keeping up to 3 digits starting from the first non-zero digit.

def format_price(price: float) -> str:
    """Format price based on its value."""
    if price >= 1:
        return f"{price:.2f}"
    else:
        price_str = f"{price:.10f}".rstrip('0')
        decimal_idx = price_str.index('.')
        for i in range(decimal_idx + 1, len(price_str)):
            if price_str[i] != '0':
                end_idx = min(i + 3, len(price_str))
                return price_str[:end_idx]
        return price_str

Below, we dynamically name the outputs for a token's price, FDV, and market cap in the quote currency, and append that quote currency to the values, to make them completely clear.

    token_asset_data.update({
        f"{inputs.ticker}_price_in_{config.quote_currency}": f"{format_price(ticker_price_in_quote)} {config.quote_currency}" if ticker_price_in_quote else None,
        f"{inputs.ticker}_fully_diluted_valuation_in_{config.quote_currency}": f"{round(ticker_fdv_in_quote)} {config.quote_currency}" if ticker_fdv_in_quote else None,
        f"{inputs.ticker}_market_cap_in_{config.quote_currency}": f"{round(ticker_mcap_in_quote)} {config.quote_currency}" if ticker_mcap_in_quote else None,
        ...

You should also remove from the output any data coming from the data sources that you're not interested in. This improves speed and lowers the cost of LLM inference. Below, we remove 2 data points (the unit and fingerprint of the token) from the output of our portfolio tracker tool:

            position.pop("unit", None)
            position.pop("fingerprint", None)

3.3 Exploring multimodal outputs

Because they use tools, blockchain AI Agents are not limited to text outputs. Price evolution or holdings data might be better shown on a graph. A token analysis might be usefully presented in a PDF report. Some information might be more efficient to listen to than to read. Shinkai tools are highly composable, and you can easily add capabilities by integrating an existing tool into your blockchain tool. A basic example is integrating a text-to-speech tool for audio output.

Our Cardano Native Token Analyzer tool can generate a trading chart, a Lorenz curve graph for holdings distribution analysis, a full PDF report, and a podcast of a few minutes about a token.

Below is the code where we generate a podcast script for a token analysis. It uses the output ‘token_data’, which contains all the data gathered by the tool for a given token, as well as the config ‘podcast_script_instructions’ and the LLM prompt processor tool:

    # Podcast Script Generation
    if config.generate_podcast_script.lower() == "yes":
        try:
            podcast_input = {
                "format": "text",
                "prompt": (
                    "You are the writer of the podcast Tappy Chats, which is about detailed and insightful Cardano blockchain data."
                    "Using all of the information from the output token_data below, generate a podcast script according to the instructions below."
                    "Make sure to talk about every section of token_data."
                    "When analyzing value, only use Ada or USD or Euro value, not the value in the amount of the token."
                    "<information from output token_data>"
                    f"{token_data}"
                    "</information from output token_data>"
                    "<instructions>"
                    f"{config.podcast_script_instructions}"
                    "</instructions>"
                    "<format instructions>"
                    "Handle numbers gracefully. If you throw too many numbers at the listeners you can lose them. So pick relevant numbers."
                    "Make sure to round numbers to make them clearer to hear and understand. For tiny numbers with many zeros below zero, you can use 'less than a cent', or 'much less than a cent'."
                    "The text must flow. Do not use any formatting, but just plain text. The text should just be one flowing paragraph."
                    "</format instructions>"
                )
            }
            podcast_script = (await shinkai_llm_prompt_processor(podcast_input)).get("message")
        except Exception as err:
            error_messages.append(f"Podcast script generation failed: {err}")
    else:
        status["podcast_script"] = "Skipped (set to 'no')"

The podcast_script output is then passed to a text-to-speech tool:

    # Audio Generation
    if config.generate_audio.lower() == "yes":
        if podcast_script:
            try:
                home_path = await get_home_path()
                audio_file_name = f"tappy_chats_{inputs.ticker}.mp3"
                # Call the integrated eleven_labs_text_to_speech function
                audio_generation_result = await eleven_labs_text_to_speech(
                    {"text": podcast_script, "fileName": audio_file_name},
                    {"ELEVENLABS_API_KEY": config.ELEVENLABS_API_KEY, "voice": config.voice}  # Pass config
                )
                audio_file_path = audio_generation_result['audio_file'] # Use the returned path
            except Exception as err:
                error_messages.append(f"Audio generation failed: {err}")
        else:
            error_messages.append("Cannot generate audio: podcast script missing.")
    else:
        status["audio_generation"] = "Skipped (set to 'no')"

3.4 Be efficient, have some optional features

The user might not be interested in all the capabilities of a given tool of the blockchain AI Agent every time they interact with it. Sometimes the user just wants a quick check on a specific data point. So while each tool can offer different features, it is worth thinking about which ones will be used often and which ones could be optional. You then must decide which options should be activated in the configuration, and which ones should be activated through inputs (allowing users to activate them within their prompts).

Concretely, optional features allow a tool, and therefore an AI Agent, to be:

  • easier to use:

In the configuration, the user activates or deactivates the desired options, and doesn't have to include that information in their prompts each time. And if an optional feature must be activated through an input, setting the default to the most often used value also lets users omit that option from their prompts most of the time.

  • faster:

The tool doesn't process something the user is not actually using.

  • cheaper:

The inference cost is lower because the LLM of the AI Agent processes less information.

Below is the configuration of optional outputs in our Cardano Native Token Analyzer tool:

class CONFIG:
    show_trading_chart: str = "no"  # "yes" or "no"
    create_pdf_report: str = "no"   # "yes" or "no"
    generate_llm_report: str = "no" # "yes" or "no"
    generate_podcast_script: str = "no" # "yes" or "no"
    analyze_holder_distribution: str = "no" # "yes" or "no"
    generate_audio: str = "no" # "yes" or "no"
    show_llm_report_content: str = "no" # "yes" or "no"

And here is, for example, the optional triggering of the holder distribution analysis, with 2 different messages for the cases of analysis failure and of a deactivated option:

    # Optional Features
    # Holder Distribution Analysis
    if config.analyze_holder_distribution.lower() == "yes":
        try:
            holder_analysis_data, holder_graph_path = await analyze_holders(unit, inputs.ticker, taptools_headers, endpoints, ticker_price_in_ada, ticker_price_in_quote, config.quote_currency)
            token_data[f"{inputs.ticker}_holder_distribution"] = holder_analysis_data
        except Exception as e:
            error_messages.append(f"Holder distribution analysis failed: {e}")
    else:
        status["holder_distribution_analysis"] = "Skipped (set to 'no')"

Note that optional outputs can rely on other optional outputs. You should therefore make sure the tool's conditional logic is correct and has good (error) messaging.

A tool should not fail globally if some options are deactivated, and it should indicate when some option is missing for a certain feature to be executed.
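For instance, a feature depending on another optional output can degrade gracefully like this (a sketch mirroring the audio example above; create_pdf is a hypothetical helper):

    # PDF Report Generation (depends on the optional LLM report)
    if config.create_pdf_report.lower() == "yes":
        if llm_report:
            try:
                pdf_file_path = create_pdf(llm_report)  # hypothetical helper
            except Exception as err:
                error_messages.append(f"PDF report generation failed: {err}")
        else:
            error_messages.append("Cannot create PDF report: LLM report missing. Set generate_llm_report to 'yes'.")
    else:
        status["pdf_report"] = "Skipped (set to 'no')"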

Here is an example of the Cardano Native Token Analyzer tool's outputs, clearly showing the state of each optional feature:

  "audio_file_path": null,
  "chart_file_path": "C:\\Users\\gille\\AppData\\Roaming\\com.shinkai.desktop\\node_storage\\tools_storage\\app-id-1741695406055\\home\\HOSKY_chart.png",
  "error_message": null,
  "holder_graph_path": null,
  "llm_report": null,
  "pdf_file_path": null,
  "podcast_script": null,
  "status": {
    "audio_generation": "Skipped (set to 'no')",
    "holder_distribution_analysis": "Skipped (set to 'no')",
    "llm_report": "Skipped (set to 'no')",
    "pdf_report": "Skipped (set to 'no')",
    "podcast_script": "Skipped (set to 'no')"
  },

And here we create a function to validate the configuration, calling it at the start of run() to stop the tool quickly if there are issues:

def validate_config(config: CONFIG) -> List[str]:
    """
    Validate configuration flags and dependencies.
    Returns a list of error messages if any issues are found.
    """
    errors = []
    flags = [
        "show_trading_chart", "create_pdf_report", "generate_llm_report",
        "generate_podcast_script", "generate_audio", "analyze_holder_distribution"
    ]
    for flag in flags:
        value = getattr(config, flag)
        if value.lower() not in ["yes", "no"]:
            errors.append(f"Invalid config value for {flag}: {value}. Must be 'yes' or 'no'.")

    if config.generate_audio.lower() == "yes" and config.generate_podcast_script.lower() != "yes":
        errors.append("Cannot generate audio without generating podcast script. Set generate_podcast_script to 'yes'.")
    return errors


    # In run(): validate the configuration and fail fast if there are issues
    config_errors = validate_config(config)
    if config_errors:
        return OUTPUT(error_message="; ".join(config_errors))

3.5 Guided outputs interpretation

Blockchain data might need some extra explanation, context, notes, considerations, or even warnings when given to users.

To make sure your blockchain AI Agent delivers well-contextualized data, you can hardcode some notes in the outputs of its tools. They will be included in the context window when the LLM of the AI Agent uses the tools to answer users' prompts, and will ensure the data is introduced with the clarifications you care about.

Below are examples of notes added to some metrics in our Cardano Native Token Analyzer tool.

In the token distribution analysis, we inform users that a raw analysis of top on-chain addresses might include special wallets still belonging to the project, and we explain the small holder ratio in more detail.

    return {
        f"{ticker}_onchain_address_concentration_top_10": f"{concentration_top_10:.2f} % of the {ticker} token total supply is held by the top 10 on-chain addresses (WARNING: this might include special wallets like project wallets of non-circulating supply, liquidity pools on DEXs, etc.).",
        f"{ticker}_small_onchain_address_ratio": f"{small_holder_ratio:.2f} % of the on-chain addresses holding {ticker} hold less than {CONFIG.small_holder_ratio_threshold}% of the total supply. This metric shows how much of the community consists of small retail investors versus big players, reflecting grassroots support. (WARNING: this metric could be affected by addresses with residual amounts.)",
        ...
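For reference, here is a minimal sketch of how such metrics could be computed from a list of on-chain holder balances (hypothetical helper; the published tool's implementation may differ):

from typing import List, Tuple

def holder_metrics(balances: List[float], total_supply: float,
                   small_holder_threshold_pct: float) -> Tuple[float, float]:
    """Compute top-10 concentration and small-holder ratio, both in percent."""
    # share of the total supply held by the 10 largest addresses
    top_10 = sorted(balances, reverse=True)[:10]
    concentration_top_10 = 100 * sum(top_10) / total_supply
    # share of addresses holding less than the threshold % of the total supply
    small = [b for b in balances if 100 * b / total_supply < small_holder_threshold_pct]
    small_holder_ratio = 100 * len(small) / len(balances)
    return concentration_top_10, small_holder_ratio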

3.6 Granular error handling

Tools should record any errors separately for each of their features, and should not fail globally if only one or some features fail. Error messages should be clear and informative, and included in the outputs, so the AI Agent can properly inform users about any problems it encountered.

If using an API, make sure the tool accesses and outputs the endpoint's original detailed error message:

    # Fetch the price of ADA in the specified quote currency
    quote_response = get(token_quote_url, headers=headers)
    if quote_response.status_code != 200:
        output = OUTPUT()
        try:
            error_details = quote_response.json()
            error_message = error_details.get("message", "An error occurred")
        except json.JSONDecodeError:
            error_message = quote_response.text
        output.error = f"Error fetching ADA price: {error_message} (Status Code: {quote_response.status_code})"
        return output

You can also create error_message and status outputs to track errors and different states:

class OUTPUT:
    error_message: Optional[str]
    status: Optional[Dict[str, str]]


    # In run(): initialize error messages and status tracking
    error_messages = []
    status = {}

Then you append any errors and statuses one by one:

        except Exception as e:
            error_messages.append(f"Holder distribution analysis failed: {e}")
    else:
        status["holder_distribution_analysis"] = "Skipped (set to 'no')"

And you output them:

    return OUTPUT(
        ...
        error_message="; ".join(error_messages) if error_messages else None,
        status=status if status else None
    )

Part 4: Setting up the AI Agent

Please refer to the corresponding section of the documentation to learn how to create AI Agents in Shinkai.

For your blockchain AI Agent, you would:

  • activate all your related tools
  • provide an adequate system prompt
  • pick a performant LLM able to interpret complex outputs (e.g. gpt_4o, shinkai_free_trial). You can also choose an LLM with vision capabilities, for example if you want your blockchain AI Agent to analyse trading charts.
  • look for interesting knowledge to add to the AI Agent

Here is an example system prompt for our Cardano AI Agent, which includes the following:

  • the AI Agent's personality
  • rules for tool selection
  • the addition of a Not Financial Advice warning

We use tags in the system prompt to separate elements and make things clear for the LLM.

<Personality>
You are Tappy, a polite, friendly, engaging and data oriented cypherpunk penguin.
You are a specialist of the Cardano blockchain ecosystem, including NFTs, Native Tokens, stablecoins, synthetics, bridged assets, and portfolio tracking.
</Personality>

<Task>
Respond with accuracy to the user's message using the most appropriate tool you have access to, and the data from its output.
</Task>

<Instructions>
Be accurate, informative and clear.
You always use a tool before replying to a user.
Use one tool and only one tool at a time.
Use the most adequate tool according to the user’s message.
You always add a short friendly reminder that your answer is not a financial advice.
You have access to some knowledge documents, use them if needed but don't mention you are using them. Just use the information in your reply.
</Instructions>

<Context>
You have access to 6 tools.
The tool "Taptools Cardano NFT Market Stats Aggregator" is about market data for NFTs on Cardano.
The tool "Taptools Cardano Depeg Scanner" is about stablecoins, synthetics, bridged coins or coins represented on Cardano (like Bitcoin and Ethereum), and peg or depeg scanning and alerts.
The tool "Taptools Cardano Native Token Analyzer" is about analyzing a Cardano native token, using its ticker.
The tool "Taptools Cardano Portfolio Tracker" is about analyzing the portfolio of a particular Cardano wallet address.
The tool "Taptools Cardano Native Token Trading Chart Generator" is about creating a chart for a given Cardano Native Token ticker.
The tool "Taptools Cardano Native Tokens Market Stats Aggregator" is about market data for Native Tokens, with ranking by mcap or fdv, by liquidity, and by volume.
</Context>

Finally, we gave our Cardano AI Agent some additional knowledge in the form of 2 PDF files: a trading chart analysis course, and a near-exhaustive list of all Cardano projects (extracted from CardanoCube).

Part 5: Using the blockchain AI Agent

5.1 Prompting well

To use your blockchain AI Agent well, have a look at the tools it accesses:

  • The tool description lets you know its features.
  • The configuration shows some settings and options.
  • The code and the inputs class show you what to include in your prompts to use the features of the tool.
  • The metadata can show all the possible values for a given input.

Below are some prompt examples for our Cardano AI Agent. They demonstrate how to trigger tools, how to include inputs in prompts, how to ask for particular information, how to ask for a specific formatting of the data, and how easy it is to interact with the blockchain AI Agent and the data it can access. Notice how each prompt includes keywords that help trigger the appropriate tool, and clearly states inputs when needed.

These prompts are from the preview screenshots of our Taptools Cardano tools available in the Shinkai AI Store. You can see most of our Cardano AI Agent's answers in those screenshots on their respective pages. Some prompts and answers can also be viewed in the demo video.

Here we simply trigger the trading chart generator, clearly giving the values to use as inputs (ticker, candle_interval, num_intervals):

Plot ticker IAG, 1d candles, 365 intervals

Here we trigger the depeg scanner tool, mentioning “bridged” as it is one of the keywords for the tool:

How many BTC are bridged to Cardano?

Here we ask for data shown in table format:

Show me a table of USD stablecoins in Cardano.

Here we ask for precise information coming from the depeg scanner tool:

Show me the peg status of all BTC related coins on Cardano.

Here, in one prompt, we ask for two different NFT rankings. The inputs used by this prompt are from the NFT market stats aggregator: ‘Number_of_today_top_NFTs_to_retrieve’, ‘Ranking_method_for_today_top_NFTs’, ‘timeframe_for_top_NFTs_by_volume’, and ‘Number_of_NFTs_per_page_for_top_NFTs_by_volume’.

Show me the top 20 NFTs by volume over the last 30d.
And rank today top NFTs by gainers, show top 15.

Here we ask for a precise piece of data (links) accompanied by more open-ended data from an output, according to an input (all time):

Give me the links to check the all time top NFT collections by volume, show me their names and some quick stats too.

Here we give both some precise inputs for the portfolio tracker tool, and some instructions to comment on the portfolio:

quote_currency : EUR
address : addrxxxxxxxxxxxxx
Analyse exhaustively the portfolio in great details, and then generate the text for a long and funny description of that wallet, making fun of the holder. For example, look into NFTs with ridiculously low value, into dust tokens, into poorly performing positions. Make sure to name the positions (name of the LP, of the ticker, of the NFT collection).

Here we ask the AI Agent to focus on a part of the token analyzer tool's output, and to also include some more information:

analyze ticker FLDT, for trading chart use 1d intervals, and 365 intervals,
focus your analysis on the holdings distribution,
and show me the links where I can know more about the project

Here we give a rather open task, but with a format for the answer. This will use all the default input values, as none are given:

write a full report on the current Cardano Native Tokens market

5.2 Deploying the AI Agent on different platforms

The Shinkai AI Store already has tools to integrate with email, X/Twitter, and Telegram. You can then use Scheduled Tasks to execute these tools regularly.

You can read this email integration tutorial. You can watch this X/Twitter integration video. And you can find an example of Telegram bot deployment in the Cardano data AI Agent demo video here and here, as well as a detailed step-by-step guide in this NASA Agent demo.

Note that you can easily fork these tools and adapt the code to your needs. For example, the default Telegram bot tool in the Shinkai AI Store answers all new messages at once, but you can easily make it reply to each message one by one (as seen in the video demos).