Training-free semantic search for the physical world.
Seek is an API that lets robots find things by name in unfamiliar spaces – without per-site training or custom detectors. Describe the target in natural language; the system scores what the robot sees, builds a semantic map and suggests where to go next.
Built for labs, warehouses, hospitals, and campuses where layouts and objects change over time.
Solid foundations for semantic navigation
Getting a robot to a single hard-coded goal is easy. Making it find things every day in changing layouts, with minimal re-engineering, is the real work.
What Seek does
Seek brings together semantic understanding, mapping, and exploration into a single API. The system builds a language-conditioned view of the environment on the fly and suggests where to go next based on meaning, not just occupancy.
- Understands targets described in natural language.
- Aggregates observations into a semantic map over time.
- Selects frontiers that are both unexplored and promising (sketched below).
- Returns approach poses at a sensible standoff distance.
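To make the frontier idea concrete, here is a rough sketch of the general technique rather than Seek's internals; every name in it is hypothetical. A frontier is worth visiting when it opens up unexplored space and the nearby observations already look semantically relevant to the target.

from dataclasses import dataclass

@dataclass
class Frontier:
    position: tuple          # (x, y) in the map frame
    unexplored_area: float   # free space beyond the frontier, in square metres
    semantic_score: float    # aggregated image-text score near the frontier, 0..1

def rank_frontiers(frontiers, weight=0.5):
    """Blend exploration gain with semantic likelihood and rank frontiers."""
    largest = max((f.unexplored_area for f in frontiers), default=0.0) or 1.0
    def blended(f):
        return weight * (f.unexplored_area / largest) + (1 - weight) * f.semantic_score
    return sorted(frontiers, key=blended, reverse=True)

Seek makes this trade-off for you and simply returns the next waypoint.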
Where it fits
Seek drops into existing stacks as a semantic search layer. You keep your navigation, control, and safety logic; we provide "what to look for" and "where to go next".
Learning by doing
We develop Seek in close collaboration with early users across operations, robotics, and research. Each deployment feeds back into the product: better grading tools, clearer logs, and tighter interfaces.
- Recovery runs: "find pallet 18B" when scans disagree.
- Guided tasks: "walk to the visitor kiosk in atrium B".
- Inspection: search around assets that don't stay put.
Hardware-agnostic by design
Any platform that can localise and stream camera data can work with Seek; a minimal observation is sketched after the list below.
- AMRs and AGVs in logistics and healthcare.
- Mobile bases with arms in labs and light industry.
- Quadrupeds and humanoids in complex indoor spaces.
- Drones with a stable pose estimate.
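For concreteness, a minimal observation from such a platform might look like the sketch below. The field names and values are illustrative assumptions, not the documented Seek payload.

# Hypothetical minimal observation; field names are assumptions, not the Seek schema.
observation = {
    "pose": {"x": 12.4, "y": -3.1, "yaw": 1.57},   # from the platform's localisation
    "rgb": "<encoded colour frame>",               # from any onboard camera
    "depth": None,                                 # optional, if the platform has depth
    "intrinsics": {"fx": 615.0, "fy": 615.0, "cx": 320.0, "cy": 240.0},
    "timestamp": 1718000000.0,                     # capture time, Unix seconds
}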
Seek: an API for training-free semantic search
Seek is our cloud API for find-by-name behaviours. Open a search session, stream observations, and receive waypoints and approach poses conditioned on natural language descriptions.
Your robot in four calls
Seek is designed to be simple to integrate. A typical loop calls search_start, then search_next until verification, and finally search_verify, plus an optional checkpoint.
- search_start – open a session, provide the target phrase, pose, and camera model.
- search_next – get the next waypoint that balances exploration and semantic likelihood.
- search_verify – confirm the target and receive a safe approach pose when it is visible.
- checkpoint – persist maps and traces to replay or resume a run.
Example flow
A basic integration couples Seek with your navigation stack. Pseudocode:
# Start a semantic search
session = seek.search_start({
    "target": "cleaning cart for ward C",
    "pose": robot.pose(),
    "camera": rgbd.frame()
})

# Move through promising frontiers until the session reports the target may be in view
while not session.done:
    waypoint = seek.search_next(session)
    nav.go_to(waypoint)

# Verify and stop at a sensible distance
result = seek.search_verify(session)
if result.found:
    nav.go_to(result.approach_pose)

# Optionally persist maps and traces to replay or resume the run
seek.checkpoint(session)
About SeekSense
Our aim is simple: give any robot the ability to find things by name safely, reliably, and from day one – without turning every deployment into a custom perception project.
What we build
Seek turns advances in vision–language navigation into a straightforward service. You describe what you care about – "spill kit in bay 4", "equipment trolley for theatre 2", "scanner cart 3" – and Seek suggests where the robot should go and when it is close enough.
- Image–text scoring to interpret your target phrases (sketched below).
- On-the-fly mapping of observations over time.
- Language-conditioned heatmaps to highlight promising areas.
- Approach poses tuned for safe stopping distances.
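To make image–text scoring concrete, here is a rough sketch of the general technique rather than our implementation: embed the target phrase and image regions into a shared space and compare them. embed_text and embed_image_patches stand in for any vision–language encoder and are hypothetical names.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def score_frame(patch_embeddings, text_embedding):
    """One relevance score per image patch; aggregated over time, such scores
    form a language-conditioned heatmap of the environment."""
    return [cosine(p, text_embedding) for p in patch_embeddings]

# Usage with a hypothetical vision-language encoder:
#   text_vec = embed_text("spill kit in bay 4")
#   patch_vecs = embed_image_patches(rgb_frame)
#   heat = score_frame(patch_vecs, text_vec)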
Why now
Perception has improved dramatically, but deployments still get stuck on environment-specific training loops, vendor-locked stacks, and slow integrations at each new site. Seek makes "find-by-name" behaviour a shared capability rather than a one-off project.
- You keep your navigation, control, and safety layers.
- We provide the semantic search and waypoint generation.
- The same API works across different fleets and layouts.
Who we serve
- Learners & educators – run "find-by-name" labs in simulation or on small robots without touching model training.
- Researchers – structured navigation setups, seeded evaluations, and artefacts that are easier to reproduce.
- Developers & startups – environment-agnostic search in weeks instead of months.
- Integrators & platform teams – one semantic search API that scales across sites and fleets.
- Operations teams – faster recoveries and searchable evidence of what the robot saw and did.
How Seek works in practice
A typical deployment streams camera frames and poses to Seek. The system aggregates those into a semantic map, selects promising frontiers, and returns waypoints and approach poses you can pass to your existing navigation stack (one possible handoff is sketched after the list below).
- Perceive & score – understand what is seen relative to the target.
- Explore with intent – move toward areas that are semantically likely.
- Verify & approach – confirm the target and stop at a safe distance.
- Checkpoint – capture artefacts for replay, grading, and demos.
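If your navigation stack happens to be ROS-based, the handoff can be as small as the sketch below. It assumes the waypoint exposes x, y, and yaw in the map frame, which is an illustrative assumption rather than the documented response format.

import math
from geometry_msgs.msg import PoseStamped

def to_nav_goal(waypoint, frame_id="map"):
    """Convert a planar waypoint (x, y, yaw) into a PoseStamped goal for a
    ROS navigation stack; non-ROS stacks need only the equivalent goal type."""
    goal = PoseStamped()
    goal.header.frame_id = frame_id
    goal.pose.position.x = waypoint.x
    goal.pose.position.y = waypoint.y
    goal.pose.orientation.z = math.sin(waypoint.yaw / 2.0)   # yaw-only quaternion
    goal.pose.orientation.w = math.cos(waypoint.yaw / 2.0)
    return goal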
Alpha programme: 5–10 teams who want training-free “find-by-name” on their robot.
We’re selecting a small group of partners to run SeekSense in real environments and help shape the product.
- Must have a mobile robot already moving in a real environment.
- Willing to jump on 1–2 calls to integrate.
- In warehouses, hospitals, labs, or campuses.
We only email when there is something useful to share. No spam; unsubscribe anytime.
Prefer email? Contact us at team@seeksense-ai.com.
Frequently Asked Questions
Common questions about SeekSense and how it works.