Our cyberlab deploys a suite of tools that customize both place-based and cyberlearning experiences using real-time assessment and evaluation, allowing visitors the opportunity to construct knowledge across learning contexts and become active participants in research. The digital nature of these tools allows data collection and analysis to take place locally or remotely. We continue to develop, test, refine, and disseminate these tools (and practices for using them) through our research projects.

CyberLab Tool 1: Observation and Interaction Systems

Observation systems act as core tools for evaluation and content customization as well as research. These systems “watch” and “listen,” reporting observations at varying, researcher-controlled levels of detail. They can work independently or in conjunction with control systems configured to trigger exhibit content changes based on the observation systems’ output.

Facial Detection and Recognition System

Facial detection and recognition engines built into new exhibits use cameras to detect faces and, further, map a visitor’s face and store the pattern in an evaluation database. Once the unique data-print pattern for a particular face has been stored, the system will recognize the same person at other exhibits, and potentially even in subsequent visits, for as long as their file remains in the evaluation database.
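
A minimal sketch of the matching step, assuming each detected face is reduced to a numeric ‘faceprint’ vector that is compared against stored records (the vectors, visitor IDs, and threshold below are illustrative placeholders, not our production values):

```python
# Illustrative sketch: match a detected face against stored faceprints.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class FaceprintStore:
    def __init__(self, match_threshold=0.9):
        self.records = {}                     # visitor_id -> faceprint vector
        self.match_threshold = match_threshold

    def enroll(self, visitor_id, faceprint):
        """Store a new faceprint so later exhibits can recognize this visitor."""
        self.records[visitor_id] = faceprint

    def identify(self, faceprint):
        """Return the id of a previously seen visitor, or None if no match."""
        best_id, best_score = None, 0.0
        for visitor_id, stored in self.records.items():
            score = cosine_similarity(faceprint, stored)
            if score > best_score:
                best_id, best_score = visitor_id, score
        return best_id if best_score >= self.match_threshold else None

# Usage: enroll a visitor at one exhibit, recognize them at another.
store = FaceprintStore()
store.enroll("visitor-001", [0.12, 0.80, 0.55, 0.10])
print(store.identify([0.13, 0.79, 0.56, 0.09]))   # -> "visitor-001"
```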

Facial Expression Engine

A facial expression recognition software engine works in tandem with the facial detection/recognition system. As the facial recognition observation system stores the ‘faceprint’ of coordinates, the facial expression engine analyzes and records the visitor’s facial expressions. Up to 70 different facial expression data points can be captured and analyzed by this system: eye shape change, eyebrow position, mouth shape and position, and head tilt, to highlight just a few. We will use these data to begin correlating expression with mood, engagement, and satisfaction, and to tune exhibits to respond to facial expressions and trigger exhibit content modifications based on that feedback.
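
As a rough illustration of how expression data points might feed back into content decisions, the following sketch reduces a few hypothetical expression features to a coarse engagement estimate (the feature names, weights, and thresholds are assumptions for illustration only):

```python
# Illustrative sketch: turn expression data points into a coarse engagement
# estimate that the control system could act on.
def engagement_score(expression):
    weights = {
        "eyebrow_raise": 0.3,   # surprise / interest
        "mouth_smile": 0.4,     # satisfaction
        "eye_openness": 0.2,    # attention
        "head_tilt": 0.1,       # curiosity
    }
    return sum(weights[k] * expression.get(k, 0.0) for k in weights)

def recommend_action(expression, low=0.3):
    """Suggest an exhibit content modification when engagement looks low."""
    if engagement_score(expression) < low:
        return "offer_simpler_content"
    return "keep_current_content"

sample = {"eyebrow_raise": 0.1, "mouth_smile": 0.2, "eye_openness": 0.5, "head_tilt": 0.0}
print(engagement_score(sample), recommend_action(sample))
```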

Digital Video Network

A camera and microphone network is woven throughout the center’s target exhibit spaces. While the observation systems utilize these cameras for capturing face data, the network can also be toggled to simply record the exhibit space to a standard digital video recorder (DVR) for particular evaluation questions or procedures.
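
A sketch of the toggle described above, assuming each camera feed can be routed either to the observation engines or straight to the DVR (camera names and mode labels are illustrative):

```python
# Illustrative sketch: per-camera routing between observation and plain DVR recording.
camera_modes = {
    "wave-tank-cam-1": "face_capture",   # feed goes to detection/recognition engines
    "wave-tank-cam-2": "dvr_record",     # feed is simply recorded for later review
}

def set_mode(camera, mode):
    """Switch how a camera feed is used for a given evaluation procedure."""
    assert mode in ("face_capture", "dvr_record", "off")
    camera_modes[camera] = mode

set_mode("wave-tank-cam-1", "dvr_record")   # e.g., for a specific evaluation question
print(camera_modes)
```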

Audio Recognition Engine

Microphones integrated into target exhibits allow visitors to interact with exhibits by speaking and will be used to collect visitors’ verbal reactions to exhibit content for evaluation purposes, using off-the-shelf speech-to-text technologies. These engines process audio input into a text transcript, allowing visitor conversations to be run through keyword-search analysis for evidence of changes in how visitors converse about content, or in their use of information from signs and exhibits, during a visit to the science center and over subsequent visits.
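
A sketch of the keyword-search step, assuming the speech-to-text engine has already produced a plain-text transcript (the keyword list and sample utterances are illustrative):

```python
# Illustrative sketch: count content keywords in transcribed visitor conversations.
import re
from collections import Counter

KEYWORDS = {"tsunami", "wave", "energy", "plate", "earthquake"}

def keyword_counts(transcript):
    """Count occurrences of content keywords in a transcribed conversation."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return Counter(w for w in words if w in KEYWORDS)

early = "look at the big wave"
late = "the tsunami wave carries energy from the earthquake under the plate"
print(keyword_counts(early))
print(keyword_counts(late))   # richer content vocabulary later in the visit
```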

Augmented Reality Engine

Augmented reality works by recognizing specific symbols or GPS coordinates and then visually replacing them with an alternative image (viewed via a handheld screen or camera-enabled exhibit). We use this technology to test alternative universal user interface schemes, as well as to offer individual users their own unique, interest-driven interactive experiences.
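
A sketch of the underlying lookup, assuming a recognized marker symbol (or GPS waypoint) maps to an overlay image that can also be varied by a visitor’s interests (marker IDs, file names, and interest tags are illustrative):

```python
# Illustrative sketch: map a recognized AR marker to an overlay image,
# optionally varied by the visitor's interest profile.
OVERLAYS = {
    ("marker-volcano", "general"): "volcano_basic_overlay.png",
    ("marker-volcano", "geology"): "volcano_magma_chamber_overlay.png",
    ("marker-volcano", "kids"):    "volcano_cartoon_overlay.png",
}

def overlay_for(marker_id, interest="general"):
    """Pick the overlay image to draw in place of the recognized marker."""
    return OVERLAYS.get((marker_id, interest),
                        OVERLAYS.get((marker_id, "general")))

print(overlay_for("marker-volcano", "geology"))   # interest-specific variant
print(overlay_for("marker-volcano", "history"))   # falls back to the general overlay
```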

Human Studies Opt-Out Measures: Radio Frequency Identification (RFID) System

RFID systems track the motion of exhibit components, especially in ‘build and test’ exhibits, as well as visitors’ exhibit use and interactive ‘quest’ activity. The same technology also allows visitors to ‘opt out’ of the ubiquitous evaluation process.
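
A sketch of the opt-out check, assuming each observation event is screened against an opt-out list keyed by RFID tag before it is written to the evaluation database (tag IDs and the event format are illustrative):

```python
# Illustrative sketch: discard observations for visitors who have opted out.
opted_out_tags = {"tag-4812"}
event_log = []

def log_event(tag_id, exhibit, action):
    """Record an interaction only for visitors who have not opted out."""
    if tag_id in opted_out_tags:
        return False   # visitor opted out: the observation is discarded
    event_log.append({"tag": tag_id, "exhibit": exhibit, "action": action})
    return True

log_event("tag-1034", "build-and-test", "picked_up_component")
log_event("tag-4812", "build-and-test", "picked_up_component")  # not recorded
print(event_log)
```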

Accelerometers and Motion Sensing Systems

Accelerometers have become a ubiquitous component in handheld systems and most motion-sensing game systems (e.g., the Nintendo Wii). This technology is easily accessed via Bluetooth and IR networks and can offer researchers information on how exhibit components have been handled: spun, waved, dropped, and so on. These sensing systems are critical in evaluating tactile, hands-on experiences.
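
A sketch of how raw accelerometer samples from a handled component might be reduced to simple event labels (the thresholds and sample values are illustrative):

```python
# Illustrative sketch: label a short window of accelerometer samples (x, y, z in g)
# according to how vigorously an exhibit component was handled.
from math import sqrt

def magnitude(sample):
    x, y, z = sample
    return sqrt(x * x + y * y + z * z)

def classify(samples, still=1.1, shake=2.5):
    """Label a window of samples as 'still', 'handled', or 'shaken_or_dropped'."""
    peak = max(magnitude(s) for s in samples)
    if peak < still:
        return "still"
    if peak < shake:
        return "handled"
    return "shaken_or_dropped"

window = [(0.0, 0.1, 1.0), (0.5, 0.4, 1.2), (2.1, 1.8, 2.4)]
print(classify(window))   # -> "shaken_or_dropped"
```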

CyberLab Tool 2: Observation Control System

The Observation Control System will allow researchers and evaluators to alter the response parameters of adaptive exhibit content. Customizable conditional filters applied to visitor evaluation data can trigger content changes in exhibits or handheld applications based on the researcher’s line of inquiry.

An example of this process as a filter script:

If the [facial recognition system] sees [visitorX]
And… [visitorX] has [most of the time] requested [basic] level interpretive information from [handheld/kiosk]
Then… Set [all kiosks] that [see] [visitorX] to content level [basic]

If, for instance, a researcher (on-site or off) wishes to run a formative evaluation study on a new Magic Planet data set, the system can be set to utilize all methods of data collection:

If the [facial recognition system] sees [any visitor]
Then… Set the [audio record] to [on while activity]
And… Set the [audio transcribe] to [on while activity]
And… Set the [video record] to [on while activity]
And… Set the [expression recognition] to [on while activity], limit [closest 3 people]
And… Set the [initiate survey] [survey 3, 5, 6] to [random] delivery
And… Set the [kiosk] [Magic Planet] to record [all keystrokes] [all button presses]
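
A minimal sketch of how the first filter above might look in code, assuming a simple visitor record and a placeholder hook for setting kiosk content levels (the field names and returned action are illustrative, not the actual control-system API):

```python
# Illustrative sketch: a conditional filter rule evaluated against visitor data.
def basic_content_rule(visitor):
    """If a visitor has mostly requested basic-level content, set kiosks that
    see that visitor to the basic content level."""
    requests = visitor.get("content_level_requests", [])
    if requests and requests.count("basic") / len(requests) > 0.5:
        return {"set_kiosk_content_level": ("all_kiosks_seeing_visitor", "basic")}
    return {}

visitor_x = {"id": "visitor-001",
             "content_level_requests": ["basic", "basic", "intermediate", "basic"]}
print(basic_content_rule(visitor_x))
```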

CyberLab Tool 3: Handheld systems

Handhelds are one of the predominant emerging technology trends in museums, supporting both formal and informal science education. Building on work we have already begun, we develop, test, and disseminate applications that enable visitors to interact with exhibits and participate in place-based activities using software tools such as augmented reality and user response systems, allowing for customized content delivery and user feedback that facilitates evaluation. Handhelds extend content experiences (as well as data collection) across all of the research platforms.

CyberLab Tool 4: Content Management System (CMS)

The CMS will act as a centralized exhibit content storage system, as well as the exhibit framework. All computer-based exhibits and handheld applications will draw their content from the CMS. The CMS will allow researchers and evaluators to rapidly prototype cyberlearning exhibits. For adaptive content exhibits, every iteration of interpretive content will have multiple versions to serve different audience levels, with room for many more variations as different audiences and audience conditions require testing. Images, videos, augmented reality files, soundtracks, audio files, narration styles, and native-language versions will all be stored as variations within the CMS.
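
A sketch of the kind of variant lookup the CMS would serve, assuming each piece of interpretive content is stored in several versions keyed by audience level and language (content IDs, levels, and file names are illustrative):

```python
# Illustrative sketch: exhibits request the content variant matching the
# current audience level and language, with a sensible fallback.
CONTENT = {
    "magic-planet-intro": {
        ("basic", "en"):    "intro_basic_en.mp4",
        ("basic", "es"):    "intro_basic_es.mp4",
        ("advanced", "en"): "intro_advanced_en.mp4",
    },
}

def get_content(content_id, level="basic", language="en"):
    """Return the stored variant, falling back to basic English if needed."""
    variants = CONTENT[content_id]
    return variants.get((level, language), variants[("basic", "en")])

print(get_content("magic-planet-intro", level="advanced", language="en"))
print(get_content("magic-planet-intro", level="advanced", language="es"))  # falls back
```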

CyberLab Tool 5: Evaluation Database

The evaluation database contains all user movements, facial responses to exhibit content, user selections from various interactive opportunities, audio comments made to learning partners, relationship links, and time spent at each learning station/exhibit. The longer a user spends at the facility, the more data we compile on choices, learning styles, and understanding. The adaptive exhibits can be configured to respond to user data and alter content to suit individual user needs and learning preferences. Customizable reporting tools offer researchers flexibility for analysis on-site or at their home institution.
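
A sketch of an evaluation-database record and the kind of simple per-visitor aggregation an adaptive exhibit or reporting tool might draw on (field names and values are illustrative):

```python
# Illustrative sketch: aggregate one visitor's logged events into a profile
# that adaptive exhibits and reporting tools could use.
visit_events = [
    {"visitor": "visitor-001", "exhibit": "wave-tank", "seconds": 240,
     "content_level": "basic", "expression": "engaged"},
    {"visitor": "visitor-001", "exhibit": "magic-planet", "seconds": 90,
     "content_level": "basic", "expression": "neutral"},
]

def profile(visitor_id, events):
    """Summarize one visitor's data to inform content adaptation and reporting."""
    mine = [e for e in events if e["visitor"] == visitor_id]
    levels = [e["content_level"] for e in mine]
    return {
        "total_seconds": sum(e["seconds"] for e in mine),
        "preferred_level": max(set(levels), key=levels.count),
        "exhibits_visited": [e["exhibit"] for e in mine],
    }

print(profile("visitor-001", visit_events))
```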