Welcome to STRANDS documentation!¶
This site contains the documentation for the software and data produced by the EU STRANDS Project. For more information on the scientific aims of the project, please see our IEEE RAM overview article or the STRANDS Project website.
The project created autonomous mobile robots which were successfully deployed for long periods in real user environments. In the process we created a great deal of open-source software for AI and robotics applications, all of which is available via the STRANDS GitHub organisation. This site provides a single location where the documentation from across that organisation can be viewed. It is also the main location for software tutorials and guides for creating systems, and provides an entry point to our software for new users.
Please note that a large amount of this site is automatically generated from our code and package documentation, so the structure is currently not perfect. Our scripts for automatically generating this site are available here.
Getting Started¶
If you wish to understand or reuse the full STRANDS system, you should follow the STRANDS system tutorial. If you want to set things up as fast as possible, see the quick start instructions. Both of these will leave you with a system which has ROS and STRANDS packages installed, and can run a simulation which uses some of the core STRANDS subsystems.
Core Subsystems¶
A STRANDS system is formed of many components which provide various pieces of functionality, ranging from navigation to user interaction. A list of all packages with a brief overview of their purpose can be found here. The following sections give a brief overview of some of the packages which form the core of the system.
STRANDS Executive¶
The STRANDS executive controls the execution of tasks requested by users or generated by the system itself, prioritising them using metrics such as expected completion time and probability of successful completion. It provides facilities for long-term task routines, task scheduling, and task planning under uncertainty. There is a STRANDS Executive tutorial which covers the main parts of the system, as well as an overview document.
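As a minimal sketch of how a task might be submitted to the executive: the service and message names below follow the STRANDS Executive tutorial but should be treated as assumptions here; consult the tutorial for the authoritative API.

```python
#!/usr/bin/env python
# Minimal sketch: submit a task to the STRANDS executive framework.
# Service and field names follow the STRANDS Executive tutorial and are
# assumptions here -- check the tutorial for the authoritative API.
import rospy
from strands_executive_msgs.msg import Task
from strands_executive_msgs.srv import AddTasks, SetExecutionStatus

rospy.init_node('task_submitter')

task = Task()
task.action = 'wait_action'           # action server the executive should call
task.start_node_id = 'WayPoint1'      # topological node to execute the task at
task.max_duration = rospy.Duration(60)
task.start_after = rospy.get_rostime()
task.end_before = task.start_after + rospy.Duration(3600)

rospy.wait_for_service('/task_executor/add_tasks')
add_tasks = rospy.ServiceProxy('/task_executor/add_tasks', AddTasks)
set_status = rospy.ServiceProxy('/task_executor/set_execution_status', SetExecutionStatus)

add_tasks([task])   # hand the task to the executive
set_status(True)    # ensure execution is switched on
```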
Person Detection and Tracking¶
When operating in populated spaces it is crucial to be able to detect and track people. STRANDS produced an indoor multi-person tracker which fuses upper-body and leg detections to track multiple people. We have also produced a wheelchair and walking-aid detector.
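A node can consume the tracker output like the sketch below; the topic name is an assumption based on the people tracker documentation, so check `rostopic list` on your own system.

```python
#!/usr/bin/env python
# Minimal sketch: listen to the people tracker output as a PoseArray.
# The topic name '/people_tracker/pose_array' is an assumption; it may
# differ depending on how the tracker is launched.
import rospy
from geometry_msgs.msg import PoseArray

def on_people(msg):
    # Each pose is the estimated position of one tracked person.
    rospy.loginfo("tracking %d people", len(msg.poses))

rospy.init_node('people_listener')
rospy.Subscriber('/people_tracker/pose_array', PoseArray, on_people)
rospy.spin()
```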
3D Mapping and Vision¶
One of the major outputs of the project is a collection of systems for discovering and learning about objects in everyday environments. These are collected together into the STRANDS 3D Mapping collection, described here.
Semantic Object Maps (SOMa)¶
The outputs of person detection and 3D mapping are stored in our Semantic Object Map (SOMa), which captures the information the robot gathers over long durations in a central store supporting a range of visualisations and queries. This is described here. SOMa is backed by our integration of MongoDB into ROS: MongoDB Store.
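As a small illustration of the underlying storage layer, a ROS message can be persisted and queried through MongoDB Store's message store proxy; the calls below follow the mongodb_store documentation and are intended as a sketch rather than a definitive example.

```python
#!/usr/bin/env python
# Minimal sketch: persist and retrieve a ROS message via mongodb_store.
import rospy
from geometry_msgs.msg import Pose
from mongodb_store.message_store import MessageStoreProxy

rospy.init_node('message_store_example')
msg_store = MessageStoreProxy()

p = Pose()
p.position.x = 1.0

# Insert the pose under a name, then query it back by that name.
msg_store.insert_named("example_pose", p)
stored, meta = msg_store.query_named("example_pose", Pose._type)
rospy.loginfo("retrieved pose with x=%f", stored.position.x)
```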
Long-Term Data Processing (FreMEn and QSRLib)¶
After data is collected in SOMa, our systems process it using various techniques. Major outputs of STRANDS include FreMEn, which provides frequency-based modelling of the temporal dimension of spatial representations, and QSRLib, a library for generating qualitative spatial relations from sensor data.
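As a small illustration, QSRLib can be called directly from Python to compute qualitative relations (here RCC8) between object traces. The class and argument names below follow the QSRLib documentation but should be treated as a sketch; older releases may use slightly different parameter names.

```python
# Minimal sketch: compute RCC8 relations between two objects with QSRLib.
from qsrlib.qsrlib import QSRlib, QSRlib_Request_Message
from qsrlib_io.world_trace import Object_State, World_Trace

world = World_Trace()
# Two objects with bounding boxes, observed over three timesteps.
robot = [Object_State(name="robot", timestamp=t, x=float(t), y=0.0,
                      xsize=1.0, ysize=1.0) for t in range(3)]
person = [Object_State(name="person", timestamp=t, x=2.0, y=0.0,
                       xsize=1.0, ysize=1.0) for t in range(3)]
world.add_object_state_series(robot)
world.add_object_state_series(person)

qsrlib = QSRlib()
request = QSRlib_Request_Message(which_qsr="rcc8", input_data=world)
response = qsrlib.request_qsrs(req_msg=request)

# Print the computed relation for each object pair at each timestep.
for t in response.qsrs.get_sorted_timestamps():
    for pair, qsr in response.qsrs.trace[t].qsrs.items():
        print(t, pair, qsr.qsr)
```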
Datasets¶
You can find the datasets generated by the project here.
Documentation contents¶
- Participant Information
- Individual Computer Setup
- Working with the STRANDS robots
- Programme
- Photos / Social Media
- 1. Some Previous Steps
- 2. Launch Simulation
- 2. Create 2D Map
- 2. Navigate using Move_base:
- 2. Create Topological Map
- Robot vpn to connect from desktop pcs and laptops
- Overall setup
- Network setups
- Changes for LAMoR
- Using the Strands Navigation System
- Creating occupancy grid map
- Creating the Topological Map
- Local metric map / semantic map
- Description
- Doing a metric sweep (package cloud_merge)
- Intermediate cloud calibration (package calibrate_sweeps)
- Metarooms and dynamic clusters (package semantic_map)
- Requesting dynamic clusters (package object_manager)
- Accessing observation data online (package semantic_map_publisher)
- Accessing data stored on the disk (package metaroom_xml_parser)
- Data collected so far in Strands and available online
- Debugging / troubleshooting
- For an up-to-date tutorial please check out https://github.com/strands-project/v4r_ros_wrappers/blob/master/Tutorial.md
- Dependencies
- Installation
- Configuration options:
- Troubleshooting
- Recognition performance
- Troubleshooting
- Tutorial on Qualitative Spatial relations and human tracking
- Tasks
- References
- Vpn to the robot
- Rules and guidelines on how to use the STRANDS robots
- VPN to the robots
- Calibrate sweeps
- Package for building local metric maps
- cloud_merge_node
- do_sweeps.py
- Ekz public lib
- scitos_3d_mapping
- Start the system
- Data acquisition
- Calibrate sweep poses
- Meta-Rooms
- Reinitialize the Meta-Rooms
- Access individual dynamic clusters
- semantic_map_publisher
- Accessing saved data
- Dependencies
- Starting the server node
- Launch file
- Triggering a learning session
- RViz monitoring
- Debug mode
- SOMA ROIs file format
- Limitations
- Package for parsing saved room observations
- Description
- Usage
- Running an example on PCDs
- Running and using the planning as a ROS node
- RViz
- Limitations
- Object manager
- Running
- Usage
- Testing
- Package for building metarooms and extracting dynamic clusters
- semantic_map_node
- Export sweeps from mongodb to the disk
- Import sweeps from the disk into mongodb
- Semantic map launcher
- Semantic map publisher
- Semantic map to 2d
- People Tracker
- Updating old bag files
- Logging package
- Updating old database entries
- Detector msg to pose array
- Ground plane estimation
- Human trajectory
- Index
- Mdl people tracker
- Odometry to motion matrix
- Upper body skeleton estimator
- Run
- Head orientation estimator
- For the G4S Y1 deployment
- Install
- Run
- Upper body detector
- Vision people logging
- Wheelchair detector
- For developers
- License
- Installation
- Tutorial
- Troubleshooting
- usage:
- params (see extended help output with -h):
- Test:
- Test params (NOTE THAT THESE ARE ROS PARAMETERS):
- References:
- Usage:
- params:
- Test:
- References:
- Technical Maintainer: [Edith Langer](https://github.com/edith-langer) (TU Wien) - langer@acin.tuwien.ac.at
- Contents
- 1. Installation Requirements:
- 2. Execution:
- usage:
- params (see extended help output with -h):
- Test:
- Test params (NOTE THAT THESE ARE ROS PARAMETERS):
- Overview
- The V4R (Vision for Robotics) library
- Object modelling
- Object recognition
- Object tracker
- References