
Conduct Vision Social Interaction Software Manual

This manual covers the installation, configuration, and operation of Conduct Vision Social Interaction Software, which is used for conducting behavioral experiments with animals in various maze types and analyzing social interaction behaviors through video tracking and data analysis.

Updated Feb 7, 2026



Contents

  1. Install and Run Application
    • 1.1 Software Installation
  2. Maze Types
  3. Video Service Connection
  4. Experiment Configuration
    • 4.1 New Experiment
    • 4.2 Configure Parameters
    • 4.3 Rectangle Drawing
    • 4.4 Ruler
  5. Experiment Run
    • 5.1 Run Experiment
    • 5.2 Tracking Data
    • 5.3 Result Tables
    • 5.4 Grid Data View Tips
    • 5.5 Video Tracking Replay
  6. Social Interaction Behavior Analysis
    • 6.1 Key Points
    • 6.2 Social Interaction Definitions
    • 6.3 Social Interaction Analysis
    • 6.4 Result Summary
    • 6.5 Heat Map
  7. Marker-less Multi-Animal Tracking
    • 7.1 Marker-less Tracking Technologies
    • 7.2 Step 1 Tracking Data
    • 7.3 Final Tracking Data
    • 7.4 Tracking Accuracy
  8. Video Batch Processing (to come)
  9. Animal
  10. Export Data
  11. Troubleshooting
  12. Camera Setup

1. Install and Run Application

1.1 Software Installation

The executable is ConductVisionUI\bin\ConductVisionUI. Double-click the executable to launch it. The application appears as shown below:

The demo version is valid for one month.

2. Maze Types

Click on the maze type field to view or select a maze type. You can switch to another maze without having to restart the application.

For the social interaction package, there is one maze type: SocialOpen.

The ConductVision software supports maze types defined in the packages PackMaze, PackSocial, PackFish, and PackAdvanced:

  • PackMaze: Barnes, Light-Dark, Novel Object, Open Field, OpenField4 (Open Field Set of 4), Elevated Plus, Radial Arm, T-maze, Y-maze, Sociability, Water Maze, U-Maze (User defined maze), Fear Conditioning
  • PackSocial: Social Open and Social Cage (to come)
  • PackFish: Zebrafish
  • PackAdvanced: Automated Radial Maze, Active Place Avoidance, Looming Visual and many others

3. Video Service Connection

To use the AI-based video tracking service, start the service stream. When the stream is established, the status field shows "Started" and turns green. If the stream needs to be stopped because of an exception, click the Stop button and then start it again.

When the application starts, it is in the stopped state.

The stream runs on the local machine and is usually available to users. If the port is blocked, please contact your system administrator to open it.

4. Experiment Configuration

The application provides a user interface to configure and execute experiments (protocols) and view execution results.

4.1 New Experiment

To create an experiment protocol, click "SocialOpen Protocols" on the left panel. A New menu item appears as shown below:

Click the New menu item; a "New Experiment" window appears as shown below. Fill in an experiment name and click the Save button.

Once the experiment is created, it appears in the "SocialOpen Protocols" folder as shown below:

An experiment can be deleted by right-clicking on it. The current active protocol is indicated at the bottom left of the screen.

4.2 Configure Parameters

To configure an experiment, double click on the experiment under the folder "SocialOpen Protocols".

4.2.1 Protocol Parameters

  • Trials (1-30) – the number of trials to run continuously in the session. Set Trials to 1 for a single SocialOpen video or camera; multiple trials are used for batch processing of videos, with each trial representing one video.
  • Trial Durs(s) – the trial duration in seconds.
  • Auto Trial – if selected, the next trial starts automatically after a trial completes. If not selected, a dialog asks the user to place the animal in the starting position before each trial starts.
  • Video Source: Video file – to track a video file, load the file.
  • Video Source: Camera – tracks animals in real time.
  • Preview – in live tracking mode, the user can preview the maze; stopping the preview produces a thumbnail that can be used for drawing the tracking areas.
  • Track Part – choose the tracked part, either head or body. Some mazes allow only head tracking. This setting is not used in SocialOpen because all 11 animal key points are tracked.
  • Number of Animals – social interaction supports up to four animals, so the selection ranges from 2 to 4.

4.3 Rectangle Drawing

4.3.1 Video Thumbnail

Rectangular area drawing is required for SocialOpen. When a video is selected or a camera is previewed, a video thumbnail is generated and shown on the Video Tracking panel. For example, when a video file is selected, the thumbnail appears as shown below.

4.3.2 Drawing Rectangle

  • Click the button in the Social Open Drawing group to enable drawing mode.
  • On the maze, click the upper-left corner and drag to the lower-right corner inside the maze.
  • After drawing, click the Save button.
  • The rectangle should be just large enough to cover the maze and animal.

4.4 Ruler

4.4.1 Distance Measurement

In the software, distances in the camera image are measured in pixels, but results should be reported in metric units such as centimeters. The ruler establishes how a pixel distance on screen maps to a real-world distance in cm.

4.4.2 Ruler Drawing

  • Click the Show Ruler button; a line appears on the maze.
  • Drag the ends of the line to two endpoints of the maze.
  • Enter the real distance in the field. For example, the standard open field maze from Maze Engineers is 40 x 40 cm, so enter 40.
  • Click the Save button to save the data.
  • Click the Ruler button again to exit ruler drawing mode.
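The pixel-to-centimeter mapping described above can be sketched as follows. The function names and example coordinates are illustrative assumptions, not part of the software:

```python
import math

def ruler_ratio(ruler_len_cm, p1, p2):
    """Compute the cm-per-pixel ratio from a drawn ruler line (hypothetical helper)."""
    pixel_len = math.dist(p1, p2)  # Euclidean length of the ruler line in pixels
    return ruler_len_cm / pixel_len

def pixels_to_cm(pixel_distance, ratio):
    """Convert an on-screen pixel distance to centimeters."""
    return pixel_distance * ratio

# Example: a 40 cm maze edge spans 400 pixels on screen
ratio = ruler_ratio(40.0, (100, 50), (500, 50))  # 0.1 cm per pixel
print(pixels_to_cm(250, ratio))  # 25.0
```

This ratio corresponds to the RulerRatio field reported in the analysis tables.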

5. Experiment Run

To execute an experiment, select the protocol under "Protocols" on the left navigation panel and double click on it. Make sure the service stream is connected.

5.1 Run Experiment

Enter a name for the experiment run and click the Start button. This puts the system in the running state.

When the Start button is clicked, it changes to Stop. Click the Stop button to stop the session run.

The experiment stops when the total time is reached, all trials are completed, or the user stops it.

In the Video Tracking panel, the animal's movements are tracked.

In the Tracking Details panel, the tracking details are recorded. The data grid shows only the latest trial. When a trial completes, the tracking data is saved.

5.2 Tracking Data

The tracking data on the Protocol tab is organized in a tree structure. For example, if the number of animals is 3, each frame has three rows, one per animal, grouped under a parent row. Double-click with the left mouse button to expand the data grid; double-click with the right mouse button to collapse it.

5.3 Result Tables

Select a result from the Experiment Result list on the left. The results are shown on the Result tab, which contains five panels:

Replay Details: contains all tracking data of the protocol run.

Replay: lets the user replay the video at a user-defined speed or frame by frame.

Social Interactions: defines social interaction parameters and analyzes social interaction behaviors.

Summary Result: calculates and displays the summary of social interaction behaviors.

Current Selection: displays the protocol run's start and end timestamps.

5.4 Grid Data View Tips

The application uses UI tree grids. Double-click with the left mouse button to expand a tree data grid; double-click with the right mouse button to collapse it.

5.5 Video Tracking Replay

5.5.1 Video Replay

The panel lets the user replay the video at a preferred speed.

The panel also lets the user search for a specific frame and step forward and backward frame by frame.

The tracked key points are highlighted as the frames advance.

This gives users a tool to review videos and closely examine animal behaviors.

5.5.2 Display Properties

The display colors, size and body part selections can be configured in the Properties dialog.

6. Social Interaction Behavior Analysis

6.1 Key Points

Conduct Vision is an AI based video tracking system. Key points are manually annotated on the animals' bodies to create a labeled dataset for training. A pre-trained model is then fine-tuned on this dataset to adapt to the specific animal species and experimental conditions. The model predicts the 2D coordinates of key points (body parts) in each frame by generating heatmaps, where each heatmap represents the likelihood of a specific key point being at each pixel location.

Eleven key points are chosen: nose, left ear, right ear, shoulder, spine1, spine2, left hindpaw, right hindpaw, tail base, tail mid, and tail end. Spine1 is used as the animal's body reference point.
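As a rough illustration of the heatmap step, here is a minimal sketch that reads off the most likely key-point location from one heatmap. The function name and the plain-list representation are assumptions for illustration, not the software's internals:

```python
def keypoint_from_heatmap(heatmap):
    """Return (x, y, confidence) at the peak of a per-pixel likelihood map.

    `heatmap` is a 2D list of likelihoods for one key point in one frame.
    """
    best_y, best_x, best_p = 0, 0, float("-inf")
    for y, row in enumerate(heatmap):
        for x, p in enumerate(row):
            if p > best_p:
                best_y, best_x, best_p = y, x, p
    return best_x, best_y, best_p

# Toy 3x4 heatmap: the peak is at column 2, row 1
hm = [[0.0, 0.1, 0.0, 0.0],
      [0.0, 0.2, 0.9, 0.1],
      [0.0, 0.0, 0.1, 0.0]]
print(keypoint_from_heatmap(hm))  # (2, 1, 0.9)
```

Running this once per key point per frame yields the 2D coordinates used downstream for behavior analysis.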

6.2 Social Interaction Definitions

6.2.1 Social Interaction Definitions

The following social interactions are implemented:

  1. Affiliative Interactions: Grooming and huddling behaviors.
  2. Exploratory Social Interactions: Sniffing, following, and initial recognition behaviors
  3. Aggressive Interactions: Chasing and attacking behaviors

Select a social interaction type and click the button to open the PDF file.

6.3 Social Interaction Analysis

6.3.1 Analysis Parameters and Run

Select the parameters and click the RunAll button to run the calculations. The results appear in the Affiliative, Exploratory, and Aggressive tabs. The summary result is also calculated.

The Save button saves the parameters, the result summary, and each behavior analysis (Affiliative, Exploratory, and Aggressive).

When a result is selected from Social Open Results, the saved parameters and result summary are populated.

6.3.2 Affiliative Behavior Analysis

Pair: the animal pair. For two animals, the only pair is 1-2. For three animals, the pairs are 1-2, 1-3, 2-3, and 1-2-3. For four animals, the pairs are 1-2, 1-3, 1-4, 2-3, 2-4, 3-4, 1-2-3, 1-2-4, 1-3-4, 2-3-4, and 1-2-3-4. For simplicity, groups of more than two animals are also called pairs.

BodyToBodyCm: the distance in cm between the bodies of two animals

IsHuddling: indicates whether the frame meets the huddling definition

StartFrom: the timestamp of the frame from the start of the tracking

RulerRatio: the ruler length in cm divided by the ruler length in pixels

Users can adjust the parameters and click the button on this panel to view the result. Results calculated from this panel are not saved.

Huddling is identified by:

  • Close Proximity: The animal nose-to-nose distances are within a predefined distance threshold.
  • Low Movement Speed: The group remains relatively stationary.
  • Duration threshold: The behavior lasts for a minimum duration (e.g., >3 seconds).

Grooming is identified by:

  • Close Proximity: The animal nose-to-body distances are within a predefined distance threshold.
  • Duration threshold: The grooming persists for a minimum duration (e.g., > 1 second).
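The huddling criteria above can be sketched as a per-frame candidate filter followed by a bout-duration check. All names and thresholds below are illustrative assumptions, not the software's defaults:

```python
def huddling_frames(nose_dists_cm, speeds_cm_s, fps,
                    dist_thresh=2.0, speed_thresh=1.0, min_dur_s=3.0):
    """Flag frames belonging to a huddling bout.

    A frame is a candidate when the nose-to-nose distance and group speed
    are both below their thresholds; a run of candidates counts only if it
    lasts at least min_dur_s.
    """
    candidate = [d < dist_thresh and s < speed_thresh
                 for d, s in zip(nose_dists_cm, speeds_cm_s)]
    min_frames = int(min_dur_s * fps)
    is_huddling = [False] * len(candidate)
    i = 0
    while i < len(candidate):
        if candidate[i]:
            j = i
            while j < len(candidate) and candidate[j]:
                j += 1          # extend the current candidate run
            if j - i >= min_frames:
                for k in range(i, j):
                    is_huddling[k] = True
            i = j
        else:
            i += 1
    return is_huddling
```

For example, at 30 fps a 3-second duration threshold requires 90 consecutive candidate frames. Grooming detection follows the same shape with nose-to-body distances and a shorter duration threshold.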

6.3.3 Exploratory Behavior Analysis

The table shows the result of exploratory behavior. It is similar to the Affiliative analysis.

Sniffing is identified by:

  • Nose proximity: Nose-to-nose or nose-to-body distances are within a predefined distance threshold.
  • Movement pattern: Small, quick head movements.
  • Duration threshold: Sniffing lasts between 0.5–3 seconds

Following is identified by:

  • Nose proximity: The nose of the follower stays within a threshold distance (e.g., < 5 cm) behind the leader.
  • Moving Direction: Both animals move in the same direction (based on body orientation vectors).
  • Duration threshold: The following persists for a minimum duration (e.g., > 1 second).
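A minimal single-frame check for the following criteria might look like this. The function name, inputs, and thresholds are examples only, not the software's API:

```python
import math

def is_following(follower_nose, leader_tail_base, follower_dir, leader_dir,
                 max_gap_cm=5.0, min_cos=0.7):
    """Check one frame for 'following' behavior.

    follower_dir and leader_dir are body-orientation unit vectors; the
    animals must head the same way while the follower's nose stays close
    behind the leader.
    """
    gap = math.dist(follower_nose, leader_tail_base)
    # Cosine similarity of the two orientation vectors (1.0 = same direction)
    cos_sim = follower_dir[0] * leader_dir[0] + follower_dir[1] * leader_dir[1]
    return gap < max_gap_cm and cos_sim > min_cos

print(is_following((0, 0), (3, 0), (1, 0), (1, 0)))   # True
print(is_following((0, 0), (3, 0), (1, 0), (-1, 0)))  # False
```

Applying this per frame and then requiring a minimum run length (as in the huddling sketch) gives the duration threshold.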

Initial Recognition is identified by:

  • Proximity: The subject rat moves within a certain threshold (e.g., 50 pixels) of the other rat's key points.
  • Head Orientation: The subject rat's nose or head is facing the other rat (e.g., within ±30° of direct gaze).
  • Pausing or Sniffing Behavior: If the rat slows down near the other rat, it may indicate recognition.
  • Movement Change: Sudden reduction in velocity (from fast to slow) could indicate recognition.
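The ±30° head-orientation test can be sketched with basic vector math; the function name and inputs are assumptions:

```python
import math

def facing_other(nose, head, other_body, max_angle_deg=30.0):
    """True if the head-to-nose direction points at the other animal
    within max_angle_deg (an illustrative gaze cone)."""
    gaze = (nose[0] - head[0], nose[1] - head[1])
    to_other = (other_body[0] - nose[0], other_body[1] - nose[1])
    ang = math.degrees(
        math.atan2(to_other[1], to_other[0]) - math.atan2(gaze[1], gaze[0]))
    ang = (ang + 180.0) % 360.0 - 180.0  # wrap the angle to [-180, 180)
    return abs(ang) <= max_angle_deg

print(facing_other((1, 0), (0, 0), (5, 0)))  # True: other animal dead ahead
print(facing_other((1, 0), (0, 0), (1, 5)))  # False: 90 degrees off gaze
```

Combining this with the proximity and velocity checks above yields a per-frame recognition indicator.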

6.3.4 Aggressive Behavior Analysis

The table shows the result of Aggressive behavior. It is similar to the Affiliative analysis.

Chaser and Attacker can be 1 or 2. If both animals qualify, the field value becomes 12, which indicates that the parameter thresholds are too large and need to be adjusted.

Chasing is identified by:

  • Proximity Reduction: The aggressor moves toward the target, decreasing their distance.
  • Speed Increase: The aggressor accelerates to match the target's movement.
  • Directional Alignment: The aggressor's nose and body center point toward the target.
  • Sustained Pursuit: The movement lasts for a minimum duration (e.g., >0.5 seconds).

Attacking is identified by:

  • Sudden Approach: Rapid reduction in distance (< 2 cm) between the aggressor and target.
  • High-Speed Movement: Fast acceleration of the attacking mouse.
  • Head & Body Orientation: Nose and head aligned toward the recipient.
  • Physical Contact: Striking, gripping, or biting movements.
  • Defensive Reaction: The recipient may escape, rear, or freeze.
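The chasing criteria (proximity reduction, speed, sustained pursuit) can be sketched as a per-frame filter with a run-length check. Thresholds and names are illustrative, not the software's parameters:

```python
def chasing_frames(distances_cm, chaser_speeds, fps,
                   min_dur_s=0.5, min_speed=5.0):
    """Flag frames where the distance shrinks while the chaser moves fast."""
    candidate = [False]  # a previous frame is needed to judge proximity reduction
    for t in range(1, len(distances_cm)):
        closing = distances_cm[t] < distances_cm[t - 1]
        candidate.append(closing and chaser_speeds[t] >= min_speed)
    min_frames = max(1, int(min_dur_s * fps))
    flags = [False] * len(candidate)
    run = 0
    for t, c in enumerate(candidate):
        run = run + 1 if c else 0
        if run >= min_frames:           # the pursuit is sustained long enough
            for k in range(t - run + 1, t + 1):
                flags[k] = True
    return flags

# Distance shrinks for three frames while the chaser stays fast
print(chasing_frames([10, 9, 8, 7, 10], [0, 6, 6, 6, 6], fps=4))
# [False, True, True, True, False]
```

Attack detection adds stricter distance and speed thresholds plus contact and orientation checks on top of the same pattern.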

6.3.5 Animal Pairs

The software analyzes groups of two, three, and four animals. The pair combinations are the same as those listed under Pair in section 6.3.2.
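Assuming the pair set is every combination of two or more animals, it can be enumerated like this (a sketch, not the software's code):

```python
from itertools import combinations

def animal_pairs(n):
    """List all pairs and larger groups analyzed for n animals (2-4)."""
    ids = range(1, n + 1)
    groups = []
    for size in range(2, n + 1):      # sizes 2 up to the whole group
        groups.extend(combinations(ids, size))
    return ["-".join(map(str, g)) for g in groups]

print(animal_pairs(3))  # ['1-2', '1-3', '2-3', '1-2-3']
```

For four animals this yields six pairs, four triples, and one quadruple.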

6.4 Result Summary

If the parameters are adjusted and the analysis is recalculated, the Calc button recalculates the result summary, but the result is not saved automatically.

6.5 Heat Map

The multi-animal interaction heat map shows the spatial density of locations where animals come close to each other.

Select an animal pair, a specific body part, the distance between body parts, and a time period. Click the Calc button to generate a heat map.
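The density calculation can be sketched as a simple 2D histogram over the midpoints of close-contact frames. The function name, inputs, and bin count are assumptions for illustration:

```python
def proximity_heatmap(points, width, height, bins=10):
    """Accumulate a 2D density grid from close-contact midpoints.

    `points` are (x, y) pixel midpoints of frames where the selected body
    parts of a pair were within the chosen distance.
    """
    grid = [[0] * bins for _ in range(bins)]
    for x, y in points:
        gx = min(int(x * bins / width), bins - 1)   # clamp to the last bin
        gy = min(int(y * bins / height), bins - 1)
        grid[gy][gx] += 1
    return grid
```

Cells with high counts mark the arena regions where the pair interacted most; rendering the grid as colors gives the heat map.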

7. Marker-less Multi-Animal Tracking

7.1 Marker-less Tracking Technologies

In multi-animal tracking scenarios, individual animals are identified and tracked independently, even in cases of overlapping or occlusions. This is achieved by incorporating identity tracking mechanisms and enforcing spatial constraints to maintain consistent tracking across frames. There are two steps of data capture and processing.

First, animal key point estimation is performed using Fully Convolutional Networks (FCNs) based on deep neural network architectures such as YOLO (You Only Look Once) and ResNets (Residual Networks). Initially, key points are manually annotated on the animals' bodies to create a labeled dataset for training. A pre-trained model is then fine-tuned on this dataset to adapt to the specific animal species and experimental conditions. The model predicts the 2D coordinates of key points (body parts) in each frame by generating heatmaps, where each heatmap represents the likelihood of a specific key point being at each pixel location.

Secondly, to maintain consistent identities across frames, algorithms such as the Hungarian algorithm for bipartite graph matching or Kalman filtering are employed. These algorithms leverage spatial proximity, motion coherence, and temporal information to associate detected key points with their respective animals over time.
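For the small group sizes supported here (up to four animals), the identity-association idea can be illustrated by brute force over assignments. This is only a conceptual sketch; the software's actual matcher uses the Hungarian algorithm and Kalman filtering, which scale to larger problems:

```python
from itertools import permutations
import math

def match_identities(prev_positions, detections):
    """Assign new detections to known animals by minimizing total displacement.

    Brute-forcing permutations is fine for <= 4 animals (at most 24 options).
    Returns a list where entry i is the detection index assigned to animal i.
    """
    n = len(prev_positions)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(math.dist(prev_positions[i], detections[perm[i]])
                   for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm)

# Two animals: detections arrive in swapped order but identities are kept
print(match_identities([(0, 0), (10, 10)], [(9, 9), (1, 1)]))  # [1, 0]
```

Spatial proximity is the only cue used here; the production matcher also exploits motion coherence and temporal smoothing.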

In the software, the tracking data shown on the Protocol tab during tracking is the result of step 1, so there may be some crossed points among animals. The step 1 tracking is displayed so that the user can view the progress of the video tracking. The final tracking data on the Result tab is the result of the full two-step process, so the animals are identified consistently across frames. The final two-step results are used for social interaction analysis.

7.2 Step 1 Tracking Data

This is for users to view tracking progress.

7.3 Final Tracking Data

The final tracking data is consistent across frames and used for social interaction behavior analysis.

7.4 Tracking Accuracy

The trained model is fine-tuned on datasets adapted to the specific animal species and experimental conditions.

To create a trained model, key points are manually annotated on the animals' bodies to create a labeled dataset for training. The dataset is constructed from a wide range of videos covering maze colors (white, black, grey, and blue), animal colors (white and black), and lighting conditions (normal and IR).

Labs commonly have different recording environments (mazes, animals, and lighting conditions). If the tracking accuracy is low, we recommend the following:

  • Improve the video recording quality; our sample videos show the expected quality.
  • Add lab videos to the training dataset to achieve higher tracking accuracy.

8. Video Batch Processing (to come)

Mouse tracking video batch processing involves analyzing multiple videos to extract mouse behavior measurements, which are often used in behavioral studies.

Organize your video files in folders with a clear naming convention. During the recording of multiple animals, the camera and maze should be positioned steadily. Make sure that each area drawing covers its region properly for all videos.

8.1 Video Batch Processing Configuration

8.2 Video Batch Processing Results

8.3 Key Steps

The key steps are summarized:

  • Create a protocol, select Run Batch, and set the other parameters
  • Load video files and input animal names
  • Run video tracking as with an individual video file or live tracking
  • The results appear in the result table, as with an individual video file or live tracking

9. Animal

The tab provides a place for users to enter animal information. The animal information is then applied to the results.

10. Export Data

  • Select a result
  • Click the Export button
  • The output file format is .csv
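The exported .csv can be reproduced in a few lines with the standard library. The column names below are illustrative, not the software's exact export schema:

```python
import csv

def export_results(rows, path):
    """Write tracking result rows to a .csv file (hypothetical column names)."""
    fieldnames = ["Frame", "Animal", "X", "Y", "TimestampS"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# One sample row for one animal in one frame
export_results(
    [{"Frame": 1, "Animal": 1, "X": 120, "Y": 80, "TimestampS": 0.033}],
    "result.csv",
)
```

Files in this shape load directly into spreadsheet tools or pandas for further analysis.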

11. Troubleshooting

11.1 Log Display

The log content can be displayed on the screen.

11.2 Database Files and User Data

The database file contains the protocols and results. There is a separate folder containing database files and video data for each maze under the bin folder.

Demo version: the demo version is valid for one month from the download and no additional setup is required.

To generate a license, you will need to provide us with the two machine names where the Conduct Vision software is intended to run.

There are two ways to find the computer name:

Method 1: From the Control Panel. This works for Windows 10 and Windows 11.

Method 2: For Windows 11, from the Start button → System → About.

12. Camera Setup

If one of our provided cameras is used, no drivers or additional software are required for the camera to function.

  1. Disable any other cameras on the computer under "Device Manager".
  2. Plug the camera into the computer's USB port. You should hear a sound indicating that the computer has recognized the camera.
  3. Open the computer's camera app to confirm the connection and adjust the camera's view.
  4. Either record subject trials in advance using your computer's camera app, or open the ConductVision application for live streaming.
