Many of us in the ergonomics world have been watching motion capture technology for years. Clients and ergonomists are excited about recent advances that allow us to gather information using just a phone video camera. Is it too good to be true? Let’s take a step back and look at the history, before we consider the current state of the technology.
I did my master’s degree in the ’90s. As part of the work our lab was doing, I had to digitize images; I learned to use a mouse (no kidding!!) and sat for hours, clicking on markers in video frames, so the computer could analyze body postures. I doubt anyone in the younger demographics can imagine manually marking 24 frames for every second of video. It was tedious work.
Since then we’ve come a long way. We now have “markerless” video, meaning that software can locate the joints, follow the body parts over time, and then calculate how much time each spends in pre-determined “awkward” postures. We can calculate movement frequencies without a stopwatch! The software outputs are flashy – they code hazards as “red”, “yellow” or “green” so anyone can see where the problems are. So, is it too good to be true? In short, yes.
The technology has gotten ahead of itself. I say this for three reasons:
1. The software can only analyze what it sees.
2. Motion capture does not account for force.
3. The tools are not sensitive to small changes in exposure.
What it sees
The software analyses a video clip of the job. If you take a 2-minute video of a job, you’ll have one view of one specific worker performing a task. It’s up to you, the analyst, to ensure that:
- the task represents normal work demands
- the worker represents a typical worker
- all body parts can be viewed on the video
It’s not possible to capture all body parts in one shot, so the software either guesses at what an obstructed body part is doing, or simply drops it from the analysis and relies on the parts that are visible.
I’m not confident about how well motion capture software will analyze the hand and wrist, given that these postures are complex, and it’s hard to get a good photograph from any angle, let alone a video.
Unless you are obtaining two different views of the worker simultaneously, the video can only estimate joint angles in one plane. Typically, the software will report shoulder flexion (reaching forward), and back and neck flexion (bending forward). It cannot measure sideways reaching or bending, or twisting.
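To see why a single camera view limits you to one plane, consider how an apparent joint angle is computed from 2-D keypoints. The sketch below uses hypothetical pixel coordinates (not the output of any particular product) to estimate shoulder flexion in the sagittal plane; any out-of-plane component of the posture is simply invisible to this calculation.

```python
import math

def angle_at(a, b, c):
    """Angle (degrees) at joint b, formed by points a-b-c in the image plane."""
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from joint b toward point a
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from joint b toward point c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    mag = math.hypot(v1[0], v1[1]) * math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / mag))

# Hypothetical (x, y) pixel coordinates from one side-view video frame.
# Image y increases downward, so the hip sits "below" the shoulder.
hip, shoulder, elbow = (300, 400), (300, 250), (380, 330)

# Angle between the trunk line (shoulder->hip) and the upper arm
# (shoulder->elbow): 0 degrees = arm hanging at the side.
flexion = angle_at(hip, shoulder, elbow)
print(f"Apparent shoulder flexion: {flexion:.0f} degrees")  # 45 degrees
```

Note that a sideways reach, or any twist toward or away from the camera, changes these pixel coordinates in ways this in-plane calculation cannot disentangle.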
So, motion capture technology quantifies a selected worker’s shoulder, back, and neck postures in the forward and back direction, where the body part is not obstructed in the video. Is this really enough?
Accounting for force
Ergonomists talk a lot about the “three primary hazards for musculoskeletal disorders (MSD)”:
- Awkward posture
- Repetitive or sustained exertions
- High force
Most ergonomists now agree that force is the most important hazard. It’s not enough to ask, “Is this a high force?” To assess risk properly, we need to know how much force is used, in what direction, and in which posture. A 5 kg load feels negligible if you’re holding it in two hands close to your body, but could exceed your strength if it’s applied at arm’s reach in a sideways direction. (I’ve written about this before.)
A video obviously can’t measure how much effort is applied. To assess “force”, the analyst must:
- measure the amount of effort using a gauge or a force-matching process
- document the direction that the force is applied, and the grip and body position used to apply it
- measure the frequency and duration of the effort (Perhaps a good use for video tools!)
- take multiple measurements if force demands vary from time to time
If the marketing materials for motion capture software say you can assess a job in minutes, the marketers are not accounting for the time required to measure force demands. Some motion capture tools suggest that an estimate can be used. Think about the can of worms that is opened when you start to use worker estimates of force. Which worker should you ask? Some software will allow the analysis to proceed without a force being entered. Failing to account for force grossly underestimates MSD risk, resulting in projects with repetitive awkward postures being prioritized over forceful, awkward tasks.
Sensitivity: Laser-guided RULA
I wish I knew who coined this phrase, which has been bounced around in recent ergonomics discussions. RULA (Rapid Upper Limb Assessment) is a checklist that has been around for 30 years, so plenty of validation studies have shown that high scores are associated with high injury risk. But let’s be clear – most industrial jobs score “high” on RULA. To yield a low RULA score, a task would need to be performed in a near-perfect body position, with less than 2 kg of effort, at a frequency of less than 4 times per minute. The RULA tool provides a score on a 7-point scale, with 5-6 meaning “medium risk, change soon” and 7 meaning “very high risk, implement change now”. RULA categorizes force into three “bins”: less than 2 kg, between 2 and 10 kg, or more than 10 kg. It doesn’t matter in which direction the force is applied – lifts, pushes, and pulls in all directions are scored the same.
Most motion capture tools use checklists like RULA: they loosely estimate posture (because they can only see what the video shows), calculate joint angles and time parameters with apparently high precision, select one of a few choices for force, and then compute a score from a tool that outputs only 7 levels. (Most tools also incorporate the “REBA” score to account for the rest of the body.) We’re applying “laser-guided” technology to a pen-and-paper checklist tool.
Currently available motion capture analysis tools are running very basic screening tools, sometimes in a “black box”, meaning that the developers do not disclose which tools they have incorporated. The tools might be useful for “screening” – taking a very quick look at all of your jobs and figuring out where to start. (Unfortunately, you may be hard-pressed to find any jobs in your facility that are not hazardous.) If you know that a job is causing injuries, perhaps you only need a tool to confirm that the job presents hazards. As marketed, the tools are quick to use and yield colour-coded outputs.
If you don’t have an injury history to flag a job as hazardous, or you want to explore how various proposed changes will affect a job, you’ll be disappointed, because the tools won’t recognize small changes in exposure. Despite the advanced technology, these are not advanced analysis tools.
So, what should ergonomists do? Stay tuned for part 2 of this series, in 2 weeks.