How To Set Up a Multi-User VR Eye Tracking Experiment in 15 minutes with SightLab VR

May 24, 2022

Sado Rabaudi

This tutorial will show you how to create, run, and analyze a multi-user eye tracking experiment in VR using SightLab VR. If you do not have an eye-tracked headset, you can also use a standard VR headset (in that case the analysis will just use head position).

SightLab VR includes a template for setting up multi-user eye tracking experiments that can be modified to use your own environment model and fixation objects. These models can come from many sources; see the Vizard documentation for supported model types. One good source for 3D models is http://www.sketchfab.com (download as .glTF). Once you have a model, you can set up your scene using Vizard’s Inspector.

For more information on Inspector see this page in the Vizard documentation.

Setting up and configuring your scene using Vizard’s Inspector

1. After acquiring a model (from Sketchfab or other sources), open Inspector (by double clicking the "build.py" file, or in Vizard by going to Tools- Inspector) and load your environment by choosing File- Open.

To check the size of your environment, click on the root node in the scene graph at the top left and look at the bottom right to see the size in meters. You can bring in a stand-in avatar (resources- avatar- vcc_male.cfg) to see where someone would start in the scene and how it looks to scale. To bring down the scale or move the starting point, select the root transform node (gear icon) of the environment (not the avatar) and use the move, scale and rotate tools. For more information on this, see this tutorial.

Note: If you don’t see a root transform node you may need to insert one at the top of your scene hierarchy by right clicking and choosing either “insert above” or “insert below”- Transform. You can then select that transform and scale your model.
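If you prefer to script this step, here is a minimal sketch of loading and adjusting an environment in Vizard code instead of Inspector; the model filename is a placeholder, and the scale and position values are examples you would tune for your own scene.

```python
import viz

viz.go()

# Load the environment model (the filename is a placeholder for your own)
env = viz.addChild('resources/environment/my_environment.osgb')

# Bring in the stand-in avatar mentioned above to judge scale
avatar = viz.addAvatar('vcc_male.cfg')

# If the model comes in too large, scale its root transform down uniformly
env.setScale([0.5, 0.5, 0.5])

# Shift the environment so the viewpoint starts where you want it
env.setPosition([0, 0, 2])
```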

2. If your environment model already includes your fixation objects, then you can skip this step, otherwise, go to File- Add and add objects that you would like to measure eye tracking data on. You can then use the transform tools to position, rotate and scale the object(s).

3. On the left you will see the list of nodes in your scene. The node with the red, blue and green icon is a group node. IMPORTANT: group nodes are what is needed to collect data on objects.

Usually when you add an object it will come in as a group node. If not you will need to right click and select “insert above- group” on any node you wish to add as a fixation object (i.e. objects that will collect and store eye tracking fixation data).

You will also need to add a group node above any objects that are already part of your model that you wish to collect data on. To move, scale or rotate the objects, click on the transform node (with the gear icon) and use the transform tools. If there is no transform node, right click and choose Insert Above- Transform.

4. Rename the objects by right clicking on the group node and selecting “rename”. (NOTE: It is important that the name of the object is not the same as a name that already exists in the model. It is good practice to append the tag “_group” to the names of objects you want to collect data on.)
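To double-check your renamed group nodes, a quick sketch like the following can help; 'painting_group' is a hypothetical name standing in for whatever you assigned in Inspector.

```python
import viz

viz.go()
env = viz.addChild('resources/environment/my_environment.osgb')  # placeholder path

# 'painting_group' is a hypothetical name assigned in Inspector; getChild
# returns the named sub-node and raises an error if the name is missing,
# which makes this a quick way to verify your renamed group nodes
painting = env.getChild('painting_group')
print(painting)
```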

5. To add an area of interest, simply drag in one of the objects labeled “FixationRegion”, then scale and place it in your scene over an area you want to collect fixations on. You can also right click and rename the area of interest. To make this area invisible to the user, you will need to make a small change in the code (see the sketch below).
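As a minimal sketch of that code change, the following hides the region using standard Vizard calls; the environment path is a placeholder, and how SightLab itself hides the region may differ.

```python
import viz

viz.go()
env = viz.addChild('resources/environment/my_environment.osgb')  # placeholder path

# Fetch the area-of-interest node by the name you gave it in Inspector
region = env.getChild('FixationRegion')

# Disable rendering so the region is invisible to the participant; in
# Vizard, disabling rendering (rather than calling visible(viz.OFF))
# generally keeps the node available to intersection tests such as the
# ones used for gaze hits
region.disable(viz.RENDERING)
```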

You can also add things such as lights, backgrounds and more with Inspector. To add a light, go to Create- Light and choose Directional, Point or Spot Light. Clicking on the light will bring up its attributes on the right.
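Lights can also be added in code; here is a small sketch using Vizard’s standard light API, with example direction and intensity values.

```python
import viz

viz.go()

# Add a light by code; a w-component of 0 in position() makes the
# light directional rather than positional
light = viz.addLight()
light.position(0, 1, 0, 0)   # shining down from straight above
light.intensity(1.5)
light.color(viz.WHITE)
```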

6. Save this scene to the resources- environment folder in your SightLab installation folder and give your scene a name.

You now have your eye tracking experiment set up to run with your eye tracker of choice, collect data, and visualize it.

Acquiring Avatars or Using Pre-Installed Ones

The multi-user version of SightLab comes with avatars you can use right away, or you can create your own using something like ReadyPlayerMe.

Here are the steps to utilize the “Ready Player Me” avatar heads in Vizard, where you can generate an avatar head from a single selfie (using webcam or upload an image) and download as a .glb file.

1. Go to https://readyplayer.me/ and make your avatar from a single selfie.

2. Download the .glb file.

3. Open it in Vizard’s Inspector (open Vizard and choose Tools- Inspector).

4. Since you will just be tracking the head and hands, remove the body by selecting each body part and clicking delete, leaving only the head.

5. Click on “Armature” on the left and then the rotate tool on top to rotate the head in Yaw (the first rotation value) by 180 degrees.

6. Use the translate tools to move the avatar head down in Y by -1.6235 and back in Z by -0.155 (you can adjust further back in Z if you still see your head in front of your eyes in the headset).

7. Save the head model from Inspector as a .OSGB model into the resources/avatar/heads folder.

Once you have an avatar head saved, just place it in the resources/avatar/heads folder on all clients and it will automatically show up as an option in the dropdown. Controllers are added separately in the resources/avatar/hand folder.
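If you would rather apply the head corrections in code than in Inspector, a sketch like this works with standard Vizard calls; the file path is a placeholder for your own saved head.

```python
import viz

viz.go()

# Load the saved head (placeholder path) and apply the same corrections
# described in the steps above
head = viz.addChild('resources/avatar/heads/my_head.osgb')
head.setEuler([180, 0, 0])              # rotate 180 degrees in yaw
head.setPosition([0, -1.6235, -0.155])  # down in Y, back in Z
```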

Running the Experiment


Run the SightLab_VR_Multi_User_Server.py script to start the server on your host machine and choose your options (for a single user, just run the SightLab_VR.py file). For 360 videos, run the 360 server and client instead.

(You can also run New_Project_Server_Multi_User.py, but this file is mainly for creating a modified version).

  • Screen Record - Records a video of the session and saves it in the “recordings” folder (note: the videos are uncompressed and may take up a lot of hard drive space).
  • Number of Trials - Sets how many trials you want to run. If left blank, the default is unlimited.
  • Fixation Time - Adjusts the time in milliseconds required for a fixation on an object (default is 500 milliseconds). This can also be adjusted in the code in the experiment function (see the sketch after this list).
  • Environment - Chooses the environment model you wish to run the session with. Place any additional environment models in the resources- environment folder.
  • Configure - See below.
  • Revert to Default Settings - Reverts to the default settings.
  • Continue - Saves the current configuration and runs the session (the last saved configuration will be auto-filled on each run).
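As a rough illustration of the fixation-time setting mentioned above, the threshold might look something like the sketch below in the experiment code; the names here are illustrative, so check the actual SightLab script for the constant it uses.

```python
# Hypothetical sketch of the fixation threshold near the top of the
# experiment code; names are illustrative, not SightLab's actual ones
FIXATION_TIME_MS = 500  # milliseconds of dwell required for a fixation

def is_fixation(dwell_time_ms):
    """Return True once gaze has dwelled on an object long enough."""
    return dwell_time_ms >= FIXATION_TIME_MS
```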

After choosing an environment from the dropdown, press “Configure” to choose fixation objects. Check or uncheck the objects you wish to collect data on by switching “Fixations” on or off, set visibility with “Visible”, and choose items you wish to grab using “Grabbable”. To manually add an object that is in your model, enter its name in the “Child Name” section (this is for objects that were not added as a group node; see below). When finished, click “Continue”.

Next, make sure all the assets you are using are copied over to all the additional clients. This includes any avatars in the resources/avatar/heads folder and environment files in the resources/environment folder.

After you’ve copied over the assets, run the SightLab_VR_Multi_User_Client.py script for the first client (this can be on the same machine or a separate machine; it does not matter which client starts first).

When the script starts you will first choose your hardware from the dropdown.

Next, choose which client number you are using (for first client choose 1, second 2, etc.)

Next, input the computer name (or IP address) of the host machine running the server (note: this can also be hard coded into the script so you won’t have to input it every time; see the sketch below). In the Windows search bar you can type “About Your PC” to find the device name of the computer running the server script.
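For the hard-coding option, a sketch like this shows the idea; the variable name is illustrative (check the client script for the field it actually reads), and socket.gethostname() is a standard-library way to print a machine’s name.

```python
import socket

# Illustrative constant; check the client script for the actual field
SERVER_MACHINE = 'DESKTOP-ABC123'  # device name or IP of the server PC

# Run this on the server PC to print its machine name
print(socket.gethostname())
```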

Choose from a list of available avatars that will represent yourself.

Connect any additional clients in the same way, running SightLab_VR_Multi_User_Client.py on the next machine and choosing client 2 for the second user, etc.

When you are ready to begin, press the spacebar on the server. This will start the timer, begin collecting eye tracking data, and start the AcqKnowledge server to collect physiological data for each user if you are using Biopac (it will also trigger the video recording from the “server” script).

Each client can look around the environment, and fixation data will print out in real time in each client’s mirrored window. Fixation data will also be saved to the experiment_data.txt file.

  • Use the ‘P’ key to toggle the gaze point on and off for the participant. It is always on in the mirrored view (a sketch of how such a toggle is typically wired appears after this list).
  • Use the left hand stick to move around and right hand stick to teleport. The trigger will grab any objects you’ve designated as “grabbable”. LH grip will rotate left and RH grip will rotate right.
  • Note: If using desktop mode the mouse will lock once the experiment starts. Use Alt+Tab to go back to the server window (or any other window).
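For reference, here is a minimal sketch of how a key toggle like the ‘P’ gaze point toggle is typically wired in Vizard; the marker model is a placeholder, since SightLab’s own gaze point node is not shown here.

```python
import viz
import vizact

viz.go()

# Placeholder marker standing in for SightLab's gaze point node
gaze_point = viz.addChild('white_ball.wrl')

def toggle_gaze_point():
    # Flip the marker's visibility each time the key is pressed
    gaze_point.visible(viz.TOGGLE)

vizact.onkeydown('p', toggle_gaze_point)
```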

When your experiment is finished, press the spacebar on the server again. You will then see a gaze path in each user’s respective mirrored window, the data files will be saved in each user’s respective “data” folder, and a video recording will be saved in the server folder under “recordings” if you checked that option.

After you quit, you'll see 3 data files saved in the data folder.

Tracking_data.txt shows a timestamp along with the x,y,z coordinates of the gaze intersect, pupil diameter (if you are using a headset that tracks pupil diameter), and custom flags. See below on how you can add more items to this file.

Experiment_data.txt shows a summary of fixations with the number of fixations per object, total fixation time, average fixation time and a timeline of fixations.

Tracking_data_replay.txt is used by the session_replay script; you do not need to work with it directly.

You can change the extension .txt to .csv if you wish to view the file in a spreadsheet editor. If you enable recording, a video recording is also saved to the “recordings” folder (note that videos are uncompressed and take up a significant amount of hard drive space).
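For offline analysis, a short sketch like this loads the gaze log into a DataFrame; the column names are assumptions, so inspect the file’s header row to confirm the actual fields for your SightLab version.

```python
import pandas as pd

# Load the per-frame gaze log; separator and columns depend on your
# SightLab version, so check the file's header row first
df = pd.read_csv('data/tracking_data.txt')
print(df.head())

# Example: summarize pupil diameter if your headset records it
# ('pupil_diameter' is an assumed column name)
if 'pupil_diameter' in df.columns:
    print(df['pupil_diameter'].describe())
```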

Session Replay

Session replay currently only works with single users, but you can check “recording” on the server to save a video that will playback the session with all users (this is found in the “recordings” folder). 

Connecting Over Separate Networks

You can connect users in a Vizard session who are remotely located by using something like TeamViewer to create a VPN connection. This setup works for remote interactions with objects as well as multi-user eye tracking. Here is how it works with TeamViewer.

  1. Make sure both computers are online.
  2. Download TeamViewer from https://www.teamviewer.com/en-us/download/windows/.
  3. Install TeamViewer as admin.
  4. Run the default installation.
  5. Under Advanced Settings, choose “Use TeamViewer VPN”.
  6. Go back to the main TeamViewer interface and click on the “VPN” dropdown option on the right.
  7. Repeat the same process on the second machine.
  8. On one computer, enter the “Partner ID” and then the password of the second computer when prompted (you will need to get these from your partner’s computer).
  9. Now you should be connected.

Connecting to Biopac AcqKnowledge

If you are using Biopac AcqKnowledge, you can change the line at the top of SightLab_VR_Client_Code from Biopac = False to Biopac = True. This will then send markers to AcqKnowledge when you fixate on objects, including the name of the object.
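The change itself is a one-line flag flip, roughly as sketched here; the surrounding SightLab client code is not shown.

```python
# At the top of SightLab_VR_Client_Code, flip the flag described above
Biopac = True  # was: Biopac = False
```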

Connecting Additional Hardware

Each user can connect to many types of hardware. See this page for all supported hardware.

For more information on SightLab VR visit our website at https://www.worldviz.com/virtual-reality-eye-tracking-for-research-solutions 
