Introduction

Orien AI is a trading agent platform built for execution, supervision, and extension.

It is useful to think of the product in three parts:

  • a shared trading core for data, execution, and monitoring
  • a pipeline runtime for structured trading agents
  • two extension ecosystems, MCP and Skill, for tools and strategies

Overview

Most AI trading products begin at the interface layer. Orien AI begins at the execution layer.

That design choice changes the product in three important ways:

  • the same execution core can serve chat, automation, and hosted agent runtimes
  • strategy logic can be packaged, replayed, and monitored as part of a pipeline
  • the platform can support markets for both strategies and AI tools

This is why Orien AI is better understood as a trading system with agent interfaces, rather than a chat product with trading features.

Who the product is for

Traders and operators

Use Orien AI when you need one operating surface for paper trading, backtesting, and live deployment.

In practice, that means:

  • reviewing agent decisions cycle by cycle
  • comparing backtest assumptions against live behavior
  • applying the same risk model before execution reaches the market

Strategy developers

Use Orien AI when you want to turn strategy logic into a reusable trading product.

The Skill ecosystem lets you:

  • implement deterministic trading logic in code
  • connect that logic to runtime phases such as signal generation, execution, and post-trade handling
  • publish Skills to the market as reusable strategy packages
  • monetize paid Skills through distribution to operators and teams
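As a rough illustration of the phase model described above, here is a minimal sketch of deterministic strategy logic wired to signal, execution, and post-trade phases. Every name in it (`MomentumSkill`, `run_cycle`, the phase method names, the order shape) is hypothetical, not the actual Orien AI Skill API.

```python
from dataclasses import dataclass, field

# Hypothetical Skill sketch: illustrative names only, not the real API.

@dataclass
class MomentumSkill:
    """Deterministic strategy logic packaged as a reusable Skill."""
    lookback: int = 3
    trades: list = field(default_factory=list)

    def signal(self, prices):
        # Go long when the latest price exceeds the lookback average.
        window = prices[-self.lookback:]
        avg = sum(window) / len(window)
        return "BUY" if prices[-1] > avg else "HOLD"

    def execution(self, decision):
        # Translate the decision into an order for the shared core.
        return {"side": decision, "qty": 1} if decision == "BUY" else None

    def post_trade(self, order):
        # Record fills so operators can review decisions cycle by cycle.
        if order:
            self.trades.append(order)
        return self.trades

def run_cycle(skill, prices):
    """Drive one pipeline cycle through the three runtime phases."""
    decision = skill.signal(prices)
    order = skill.execution(decision)
    return skill.post_trade(order)

skill = MomentumSkill()
history = run_cycle(skill, [100.0, 101.0, 103.0])
```

Because the logic is plain, deterministic code attached to named phases, the same package can be replayed in backtests and monitored live without modification.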

AI tool builders

Use Orien AI when you want to provide tool access to AI-assisted trading workflows.

The MCP ecosystem lets you:

  • extend AI Chat with MCP servers
  • ship reusable tools that can be installed and managed by operators
  • publish paid MCP products and earn revenue from installs or subscriptions
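To make the tool-builder role concrete, the following is an illustrative sketch in the spirit of an MCP server: named, described tools dispatched from a JSON request. It deliberately avoids the real Model Context Protocol SDKs; every name here (`TOOLS`, `register_tool`, `handle_call`, the `quote` tool) is hypothetical.

```python
import json

# Illustrative only: a minimal tool registry in the spirit of an MCP
# server, not the official SDK.

TOOLS = {}

def register_tool(name, description):
    """Expose a function as a named, described tool for AI Chat."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@register_tool("quote", "Return a mock last price for a symbol")
def quote(symbol: str) -> dict:
    # A real server would fetch live market data here.
    return {"symbol": symbol, "last": 42.0}

def handle_call(request_json: str) -> str:
    """Dispatch a JSON tool-call request, as an MCP-style server would."""
    req = json.loads(request_json)
    tool = TOOLS[req["tool"]]
    return json.dumps(tool["fn"](**req["args"]))

result = handle_call('{"tool": "quote", "args": {"symbol": "BTC-USD"}}')
```

The registry pattern is what makes tools installable and reusable: each tool carries its own name and description, so an operator can install, version, and invoke it without reading its source.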

What makes the product different

The main difference is where Orien AI places autonomy.

In Orien AI, autonomy does not live inside one opaque prompt loop. It lives inside a runtime with defined phases, shared execution primitives, and explicit monitoring.

That leads to a different operational posture:

  • prompts can assist reasoning, but they do not replace the runtime contract
  • risk checks are part of the flow, not an add-on after a decision is made
  • backtesting is treated as part of deployment hygiene, not a separate research artifact
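The second point above, risk checks as part of the flow, can be sketched as a gate that sits between the decision and the market, so no order is sent unchecked. All names (`risk_gate`, `MAX_NOTIONAL`, `execute`, the order shape) are assumptions for illustration, not the platform's actual risk model.

```python
# Hedged sketch: the risk check is a pipeline stage, not an afterthought.

MAX_NOTIONAL = 1_000.0  # assumed per-order limit for this example

def risk_gate(order):
    """Pass or reject an order before it can reach execution."""
    notional = order["qty"] * order["price"]
    if notional > MAX_NOTIONAL:
        return None, f"rejected: notional {notional} > {MAX_NOTIONAL}"
    return order, "accepted"

def execute(order):
    """Stand-in for the shared execution core."""
    sent = []
    checked, status = risk_gate(order)
    if checked:
        sent.append(checked)  # only risk-approved orders are sent
    return sent, status

ok, status_ok = execute({"qty": 2, "price": 100.0})
blocked, status_blocked = execute({"qty": 20, "price": 100.0})
```

Because the gate runs inside the flow, the same check applies identically in paper trading, backtests, and live deployment.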

Ecosystem attachment points

MCP and Skill extend different parts of the platform. They do not compete for the same role.

MCP ecosystem
  • Extends AI Chat with tools
  • Publishes paid MCP servers
  • Supports install, versioning, and reuse
Both ecosystems attach to the same shared trading core that powers Agent Trading, converging on one execution, monitoring, and risk model.

Skill ecosystem
  • Packages trading logic as Skills
  • Serves pipeline and integrated agents
  • Supports paid strategy distribution

Business model for builders

Both extension ecosystems support monetization.

For strategy developers:

  • Skills can be published as paid strategy packages
  • teams can distribute private or public paid strategy modules

For AI tool builders:

  • MCP servers can be sold as paid capabilities
  • users can install or subscribe to reusable tool products

This matters because the platform is designed not only to run agents, but also to support the developers who supply the strategy and tool layer.

Design principles

  • Execution before interface: reliability at the data, order, and wallet boundary comes first
  • Structured autonomy: pipeline agents rely on orchestration and explicit phases rather than free-form prompting
  • Shared operational model: paper trading, backtesting, and live deployment should not require different mental models
  • First-class ecosystems: Skills and MCP are part of the platform model, not side integrations