On-Demand Sessions

These sessions will be available on demand throughout the entire conference, September 14th – 18th.

ePoster | Ignite | Innovations Demo | Product Demonstration

 


ePosters

AP-CAT: A Comprehensive, Adaptive Web-Based Assessment Platform with Diagnostic Features for AP Statistics


Cognitive Diagnostic Computerized Adaptive Testing for AP Statistics (AP-CAT) is a National Science Foundation-funded project that aims to enhance high school statistics education by combining modern information technology with state-of-the-art testing approaches. Our item bank contains 842 well-designed items mapped to a knowledge tree with four sections, 16 main topics, and 157 learning attributes. This allows us to provide individual diagnostic feedback to students regarding their strengths and weaknesses, and group-level score reports to teachers to help them adjust teaching strategies and content. The platform adopts an integrated test delivery strategy to serve multiple purposes: 1) (bi-)weekly assignments assembled by teachers by selecting items from the large bank; 2) bi-monthly sectional linear tests designed by our team to assess student learning on different topics; and 3) an annual adaptive test designed by our team that mimics the content composition of the actual AP Statistics exam.
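
For readers unfamiliar with this style of diagnostic reporting, the sketch below illustrates the general idea of mapping items to learning attributes and summarizing a student’s performance per attribute. The item IDs, attribute names, and proportion-correct rule are illustrative assumptions, not the AP-CAT implementation.

    # Illustrative sketch only: a toy item-to-attribute mapping ("Q-matrix") and a
    # simple per-attribute summary. Item IDs, attribute names, and the
    # proportion-correct rule are assumptions for demonstration, not AP-CAT's design.
    from collections import defaultdict

    # Each item is tagged with the learning attributes it measures.
    q_matrix = {
        "item_001": ["sampling_distributions"],
        "item_002": ["sampling_distributions", "confidence_intervals"],
        "item_003": ["confidence_intervals"],
        "item_004": ["hypothesis_testing"],
    }

    # One student's scored responses (1 = correct, 0 = incorrect).
    responses = {"item_001": 1, "item_002": 0, "item_003": 1, "item_004": 0}

    def attribute_feedback(q_matrix, responses):
        """Return the proportion correct per attribute as simple diagnostic feedback."""
        totals, correct = defaultdict(int), defaultdict(int)
        for item, attributes in q_matrix.items():
            if item not in responses:
                continue
            for attr in attributes:
                totals[attr] += 1
                correct[attr] += responses[item]
        return {attr: correct[attr] / totals[attr] for attr in totals}

    print(attribute_feedback(q_matrix, responses))
    # {'sampling_distributions': 0.5, 'confidence_intervals': 0.5, 'hypothesis_testing': 0.0}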

Presented by:

  • Cheng Liu, University of Notre Dame

Are Learning Sciences and Psychometrics Strange Bedfellows? Challenges and Opportunities in Applying Learning Sciences to the Design of Assessments


How compatible are learning sciences and psychometrics? Both fields make inferences about candidate cognition based on manifested behaviors. However, they diverge in how they conceptualize the design and use of assessments. The aim of this presentation is to highlight fundamental differences between the two paradigms, suggest ways these differences can be addressed, and propose implications and opportunities in the assessment landscape.

Presented by:

  • Natalie Jorion, PSI

Learning Credits: Creating a New Learning Currency


A leader in the gaming and interactive industries and the provider of one of the hottest technical certifications, this certification body worked directly with two partners to introduce a Learning Credit currency. Training and enterprise customers can now acquire, spend, and manage Learning Credits.

These customers issue learning products to candidates, which enhances adoption, retention, and expansion of the community. Through a branded ecommerce marketplace, training and enterprise customers can purchase Learning Credits, providing versatility, visibility, and management of approved learning products.

Meeting a critical business need, the certification body can now view these various accounts from a centralized hub, which enables it to monitor both sales and consumption across the Learning Credit ecosystem.

Presented by:

  • Kevin Brice, Xvoucher

The World Has Changed: Is Your Learning Strategy Still Relevant?


The cost of learning resources for standardized tests can hinder a student’s ability to prepare for high-stakes exams. Learn how a mission-driven nonprofit organization developed a business model that enables learners to reach their highest potential while also delivering a positive, trusted learning experience. Hear the results of a two-year study examining the efficacy of this company’s online test-preparation program and the difference in outcomes for students.

Presented by:

  • Celeste Martinell, BenchPrep
  • Leslie De Leon, AAMC

How SAP has Digitally Transformed their Certification Program


SAP is the market leader in enterprise application software. SAP runs a cloud-based certification program for employees, customers, and partners, offering over 150 certifications and delivering exams in up to 20 languages as part of a truly global program.

This session describes the latest developments in the SAP certification program, focusing particularly on the advantages and disadvantages of certification in the cloud and how it is possible to run a certification program that is truly global. We will describe SAP’s “stay current” process, which is extremely agile and allows exams to be updated rapidly to keep pace with SAP’s software changes and the frequent release cycles of its cloud solutions.

We will cover the following areas:

  • Process improvements made possible by use of the cloud
  • How translations are managed and processed
  • Candidate experience of certification in the cloud
  • Challenges with the cloud model
  • Integration with digital badges
  • Technology/infrastructure used

Presented by:

  • Daniela Kelemen, SAP SE
  • John Kleeman, Questionmark

Improving Multiple Choice Items -- Three Options v. Four Options


Despite recent technological advancements in assessment, multiple-choice (MC) items remain a staple due to the speed and cost associated with writing and scoring them. Four-option items appear to be the gold standard, as they strike a nice balance between the ease of creating plausible options and limiting the odds of answering correctly by guessing. However, research (Rodriguez, 2005) challenges the commonly held belief that four options are superior to three.

This study evaluates the current functioning of four-option MC items across all objective assessments within one university and tests the hypothesis that writing three-option MC items is quicker and cheaper than writing four-option MC items, through an item-writing activity in which three-option MC items will be written alongside four-option items. Final costs will be tallied and compared, and focus groups and surveys will be conducted with item writers to gain insight into the speed and ease of constructing these two item types.
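
As a rough illustration of the guessing trade-off this study revisits, the sketch below compares the probability of reaching a given score by blind guessing alone on three-option versus four-option items, using the binomial distribution; the test length and cut score are hypothetical and not drawn from the study.

    # Back-of-the-envelope sketch: chance of reaching a cut score by blind guessing
    # on 3-option vs. 4-option MC items. Test length and cut score are assumptions.
    from math import comb

    def p_guess_at_least(n_items, cut, p_correct):
        """P(at least `cut` correct out of `n_items`) under pure guessing."""
        return sum(
            comb(n_items, k) * p_correct**k * (1 - p_correct) ** (n_items - k)
            for k in range(cut, n_items + 1)
        )

    n_items, cut = 60, 42  # hypothetical 60-item test with a 70% cut score
    print(f"4 options: {p_guess_at_least(n_items, cut, 1/4):.2e}")
    print(f"3 options: {p_guess_at_least(n_items, cut, 1/3):.2e}")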

Presented by:

  • Jason Meyers, Western Governors University

Improving the Utility of Competency-Based Performance Assessments via Cognitive Design


As online competency-based higher education increases in prevalence among working adults, the quality of its assessments must evolve and meet the standards necessary to predict work-related competencies. Performance-based assessments that simulate job tasks represent a viable method of assessing these competencies. To maximize the validity of score use, it is necessary to delve deeper into the cognitive components of task stimuli and the response process and to link them to the underlying competency using cognitive theory and design. One can then effectively target the range of the competency by systematically varying the difficulty level of assessment tasks. Discussion centers on the effectiveness of cognitive design in CBE performance assessments for IT courses, implications for differentiating mastery and non-mastery students, and customizing assessments to give students competency-level feedback at various stages toward course completion.

Presented by:

  • Heather Hayes, WGU

Is This an Outcome to Add to my Report?


One of the key questions in test design is: ‘How many items do I need to reliably report on a learning goal within a test?’ In this presentation, techniques are provided to determine the number of items necessary in different situations and test designs.

Using Bayesian evaluation of diagnostic hypotheses, we can distinguish between the response behavior of masters and non-masters. In practice, this means that for a response pattern on a group of items, we evaluate whether the pattern is more in line with the behavior of a master or that of a non-master.

We guide the audience through the steps needed to apply this technique and help them evaluate their own test or test design. We also provide results of a small simulation study evaluating different hypotheses for mastery and non-mastery, different numbers of items, and the effect of harder or easier items.
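
As a rough illustration of the underlying idea (not the presenters’ implementation), the sketch below compares how likely a response pattern is under a “master” hypothesis versus a “non-master” hypothesis, assuming independent items and illustrative per-item success probabilities.

    # Minimal sketch: evaluate whether a 0/1 response pattern is more in line with a
    # master or a non-master. Items are assumed independent, and the per-item success
    # probabilities below are illustrative assumptions.
    def pattern_likelihood(responses, p_correct):
        """Likelihood of a 0/1 response pattern given per-item success probabilities."""
        likelihood = 1.0
        for x, p in zip(responses, p_correct):
            likelihood *= p if x == 1 else (1 - p)
        return likelihood

    responses = [1, 1, 0, 1, 1]                    # observed pattern on a 5-item learning goal
    p_master = [0.85, 0.80, 0.75, 0.90, 0.80]      # assumed success probabilities for a master
    p_nonmaster = [0.45, 0.40, 0.35, 0.50, 0.40]   # assumed success probabilities for a non-master

    evidence_ratio = pattern_likelihood(responses, p_master) / pattern_likelihood(responses, p_nonmaster)
    print(f"Evidence ratio (master vs. non-master): {evidence_ratio:.1f}")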

Presented by:

  • Anton Beguin, Cito
  • Hendrik Straat, Cito

Iterative Item Incubation for Incessant Inspection: Developing and Maintaining Highly Scrutinized Longitudinal Assessments


Longitudinal assessment programs are becoming more prevalent in the world of certification/licensure. These programs incorporate components of adult learning theory and spaced education to increase the formative value of testing to the test taker, while still providing the testing organization with sufficient information to make summative decisions. To accomplish this goal, longitudinal programs focus on more continuous assessment and feedback throughout a practitioner’s career.

This ePoster session will describe the end-to-end development and evaluation cycle of exam items in a longitudinal assessment program seeking to fulfill summative and formative purposes. A case study of a longitudinal program built from the ground up will demonstrate the challenges of rapidly developing content suitable for multi-platform administration and viewing, and of making that content available to test takers for review, in near-perpetuity, following the initial administration.

Presented by:

  • Allie Daugherty, American Board of Pediatrics
  • Robert Furter, American Board of Pediatrics

JAWS Doesn’t Bite! Experience Test Delivery with Assistive Technologies


Balancing innovation, comparability, and accessibility can seem like a huge undertaking for any test creator, even in 2020. We all want to move beyond a purely ‘box-ticking’ exercise and make inclusivity core to all test development processes. However, with the right software and mindset, accommodating screen readers and the test takers who use them is a huge step forward for many test creators on the journey to making their exams truly accessible.

In this interactive e-Poster session, we invite you to take a short test using accessibility tools common to a candidate with a severe visual impairment. Here you will experience first-hand the JAWS screen reader working within a test driver, and examples of content that can make a significant difference to a candidate's experience.

You’ll leave the session with a stronger insight into the experience of candidates with accessibility needs and a few simple steps you can take to improve the assessment experience for everybody.

Presented by:

  • Richard London, Surpass, Powering Assessment
  • Neil Dickinson, BTL Surpass

Scoring Short-Answer Items on a High-Stakes Medical Licensing Examination: An Application of Natural Language Processing and Machine Learning


The increasing reliance on constructed-response (CR) items in large-scale assessment reflects an interest in broadening the constructs measured in a rapidly evolving landscape. However, the use of CR items, while promising, raises many psychometric and practical challenges, mostly because CR items rely heavily on human scoring. Automated scoring (AS) offers a promising alternative for supplementing or even replacing human scoring of CR items, but its use must be implemented in a manner that upholds psychometric best practices. While automated essay scoring has been and continues to be used operationally in many testing programs, applications of AS to short-answer items have received less attention. The current session will outline an application of AS for short-answer items included in a bilingual high-stakes medical licensing examination.
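
As a generic illustration of how short-answer scoring pipelines of this kind are often built (not the approach used on this examination), the sketch below trains a toy TF-IDF plus logistic-regression scorer on invented, human-scored responses using scikit-learn.

    # Illustrative sketch only: a generic short-answer scoring pipeline. The toy
    # responses and labels are invented; this is not the examination's scoring model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny toy training set: human-scored short answers (1 = acceptable, 0 = not).
    train_texts = [
        "administer epinephrine intramuscularly",
        "give IM epinephrine immediately",
        "prescribe oral antihistamines only",
        "no treatment is required",
    ]
    train_scores = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(train_texts, train_scores)

    new_response = ["epinephrine given intramuscularly right away"]
    print(model.predict(new_response), model.predict_proba(new_response))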

Presented by:

  • Maxim Morin, Medical Council of Canada
  • André De Champlain, Medical Council of Canada

Trust But Verify: Analyzing Vendor Data for Anomalies


When working with vendors for computerized testing, it is important to remain vigilant for potential anomalies in the data collected. Vendors generally work with many different clients and programs; however, there is no guarantee that results and processes are error-free, and the quality of all data should be reviewed. Verification of data is typically well understood from a test publication perspective, as there are multiple reviews of the converted examination content. However, it is equally important to review the response data collected after the examination. Vendor metadata such as key strings, reports, candidate comments, survey data, response times, and demographic information are all examples of data that may flag issues requiring further investigation. Anomalies identified and addressed play an important role in improving the quality assurance aspect of a program.
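
As one simple illustration of the kind of post-examination check described above (an assumption, not the presenters’ method), the sketch below flags item response times that are implausibly fast relative to the group median.

    # Simple sketch: flag candidates whose response time on an item is far below the
    # group median, one signal that vendor response data may warrant further review.
    # The times and threshold are illustrative assumptions.
    from statistics import median

    # Hypothetical response times (seconds) for one item across candidates.
    response_times = {
        "cand_01": 64, "cand_02": 71, "cand_03": 58, "cand_04": 5,
        "cand_05": 69, "cand_06": 77, "cand_07": 4, "cand_08": 62,
    }

    def flag_fast_responses(times, fraction=0.25):
        """Return candidates whose time falls below `fraction` of the group median."""
        cutoff = fraction * median(times.values())
        return [cand for cand, t in times.items() if t < cutoff]

    print(flag_fast_responses(response_times))  # ['cand_04', 'cand_07']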

Presented by:

  • Nicholas Williams, American Osteopathic Association
  • Tara McNaughton, American Osteopathic Association
  • Maria Incrocci, American Osteopathic Association


 

Ignite Session

A Blueprint for the Future: The Crucial Role of Exam Blueprints


When creating standardized exams, one of the most critical steps is creating a blueprint, framework, or content outline for the exam. It’s a step often missed in the rush to create an exam, but this Ignite Session explores the steps behind the process, including the reasons for creating an exam blueprint and why it should be shared with your stakeholders.

Using practical examples, this session will showcase the importance and power of developing an exam blueprint, a crucial step in the exam development process.

Presented by:

  • Neil Wilkinson, Pearson VUE


 

Innovations Demo

A Reporting Dashboard to Provide a New Level of Transparency in Exam Integrity


As more testing programs consider or adopt online proctoring as a secure way to deliver credentialing exams, it is becoming more important to present objective data points that prove its effectiveness. This demonstration will showcase our new Facilitator Dashboard, which gives sponsors a lens into each exam window. Sponsors can see data in aggregate or drill down to individual session details. The dashboard contains real-time exam data such as scheduling metrics, prevention metrics, metrics about unpermitted resources, integrity breach metrics, technical troubleshooting metrics, candidate satisfaction scores, start time and exam duration metrics, and more. It is our hope that this demonstration will spark a discussion around data transparency and the need for more insight into proctoring data, regardless of the modality.

Presented by:

  • Jenny Rankin, ProctorU
  • Christopher Brown, ProctorU

CRASE: Automated Scoring Engine Model Training & Operation at Scale


Automated scoring has been embraced by many assessment programs as an innovative approach to scoring short and extended text responses quickly and at a lower cost than typical human scoring. But how are scoring models created and evaluated? This session will showcase model training and operational scoring with the CRASE Automated Scoring engine.

The demo will include the following experiences:

  1. Attendees will learn how a scoring model is created;
  2. Attendees will see a scoring model trained and deployed in real time;
  3. Attendees will learn how automated scoring systems ensure the integrity and validity of scoring;
  4. Attendees will have an opportunity to submit responses to the CRASE API and witness their responses being scored in real time.

Presented by:

  • Scott Wood, ACT

Exam Content Crowdsourcing: Overview and Demonstration for Generating Items


pValue is an online marketplace that connects content and community experts with certification providers who are looking to launch new exams. We are changing the business model of content generation and modernizing the process of creating and launching new exams. Our value is to add rigor, psychometric validity, and efficiency to this new exam crowdsourcing model. Our demonstration will consist of walking through our community content tools and the process for launching new crowdsourced exams.

Presented by:

  • Kate O'Connor, Certiverse

How Remote Proctoring is Creating Equal Opportunity in Education


Good educational outcomes matter, not only at the individual level but at the societal level as well. Academic dishonesty has always been a threat to institutions; however, remote proctoring solutions are protecting the integrity of assessments while leveling the playing field for students everywhere. By creating educational equality, more students around the world are given the opportunity to have a quality education, and with the definition of a "student" shifting toward the lifelong learner, it is crucial for institutions to provide a scalable solution for education. Remote proctoring solutions are diversifying the educational landscape and enhancing inclusivity through assessment. Students and instructors alike are able to customize their educational experience and demonstrate learning on their own time, from almost any location around the world.

Presented by:

  • Olivia Hood, Proctorio

Longitudinal Assessments in LearningBuilder


This demonstration shows how longitudinal assessments can serve as an alternative to high-stakes examinations for recertification. It shows how a program might construct a longitudinal assessment scheme, administer exams over time, provide feedback to certificants, and avoid the inconvenience and expense of high-stakes assessments.

Presented by:

  • Christopher Butcher, Heuristic Solutions


 

Product Demonstrations

ATA's Newest AI-Assisted Remote Test Proctoring


To meet the increased demand for remote test proctoring arising from the COVID-19 pandemic, ATA has recently upgraded its AI-assisted remote test proctoring (AI-ARTP) system based on findings and lessons from its recent large-scale applications in China. Through vivid live-audience participation, this demonstration will highlight the system’s new key features, including smartphone and computer-camera dual monitoring, AI identification of a wide range of suspected security violations ranging from improper examinee behaviors to prohibited objects, latency analyses, and eye tracking. Measures taken to secure and protect data, the network, and privacy will also be explained. It is hoped that the audience will gain a further concrete understanding of, and experience with, what constitutes a trustworthy AI-assisted remote test proctoring system with regard to industrial-grade reliability, accuracy, and ethics.

Presented by:

  • Alex Tong, ATA
  • Neil Yao, ATA
  • Lingxiao Ou, ATA

Remote Means More than Delivery - Maintaining a Secure, Collaborative Test Development Workflow from Home


Regardless of what the world is currently throwing at us, we appreciate that your item bank still needs to be refreshed, revised, and replenished, and your passing scores still need to be set. It’s not as easy to do this in a remote setting as it is with in-person workshops, due to potential security risks and disrupted processes, right? With the right software and ideas, remote collaborative test development can actually be more productive, more cost-effective, smoother, and more focused. You’ll see that a virtual process can be as rewarding for your SMEs and your program as an in-person workshop.

Virtual content development and standard-setting sessions need more structure, take more planning, are less organic, and may span a longer timeline than in-person workshops. In this product demonstration, we’ll show you how tools can be used to plan, communicate, author, and manage your virtual test development process. Examples will include task-management tools to help keep item writers on track, options for virtual conferencing (including some little-known features available in common platforms), and new interactive collaboration tools for running virtual breakout sessions and gathering feedback. There will be plenty of takeaways for you and your team from this session.

Presented by:

  • Ben Brady, BTL Surpass


 
