UbiComp2019_EduSense
Notes on a top-venue paper on smart classroom analytics
Abstract
1) High-quality professional development opportunities for university teachers require classroom data.
2) Currently, there is no effective mechanism for giving personalized formative feedback other than manual observation.
3) This paper presents the culmination of two years of research: EduSense (with both visual and audio features).
4) EduSense is the first system to unify previously isolated features into a cohesive, real-time, and practically deployable system.
Key Words
Classroom, Sensing, Teacher, Instructor, Pedagogy, Computer Vision, Audio, Speech Detection, Machine Learning
Introduction
> Increasing student engagement and participation in class has been shown to improve learning outcomes.
> Compared with K-12 teachers, university instructors are generally only domain experts, not trained in how to teach students.
> Regular teaching feedback is important for instructors to improve, and pedagogical skill is not easy to acquire.
> Acquiring regular, accurate data on teaching practice is currently not scalable.
> Today's teaching feedback relies heavily on professional human observers, which is very expensive.
> EduSense captures a wide variety of classroom facets shown to be actionable in the learning science literature, at a scale and temporal fidelity many orders of magnitude beyond what a traditional human observer in a classroom can achieve.
> EduSense captures both audio and video streams using low-cost commodity hardware that views both the instructor and students.
> Detected features: hand raises, body pose, body accelerometry, and speech acts (see Table 1 for details).
> EduSense is the first system to unify the many previously isolated classroom-sensing features into one system.
> EduSense aims to do two things: 1) provide instructors with pedagogically relevant classroom data they can use to practice and grow, and 2) serve as an extensible open platform.
Related Systems
> There is an extensive learning science literature on methods to improve instruction through training and feedback.
> [15] [26] [27] [32] [37] [38] [77] [78] (PS: these appear to be all CMU papers)
2.1 Instrumented Classrooms
> Use sensors (e.g., pressure sensors [2][58]) to collect data on students in class, or instrument the physical structure of the classroom.
  - Adding computing to the tabletop (e.g., buttons, touchscreens) or using response systems like "clickers" [1][12][20][21][68]
  - Low-cost printed responses using color markers [25], QR codes [17] or ARTags [57]
> Use wearables to directly collect precise signals from students or teachers.
  - Affectiva's wrist-worn Q sensor [62] senses the wearer's skin conductance, temperature and motion (via accelerometers)
  - EngageMeter [32] used electroencephalography headsets to detect shifts in student engagement, alertness, and workload
  - Instrument just the teacher, e.g., with microphones [19]
> Drawback: instrumentation carries a social, aesthetic and practical cost.
2.2 Non-Invasive Class Sensing
> The goal is to maximize value while using as little invasive equipment as possible. Among non-invasive sensors, acoustic and visual sensing are all but indispensable for classroom sensing.
> Speech
  - [19] used an omnidirectional room microphone and a head-mounted teacher microphone to automatically segment teacher and student speech events, as well as intervals of silence (such as after teacher questions).
  - AwareMe [11], Presentation Sensei [46] and RoboCOP [75] (oral presentation practice systems) compute speech quality metrics, including pitch variety, pauses and fillers, and speaking rate.
> Cameras and computer vision
  - Early systems, such as [23], targeted coarse tracking of people in the classroom, in this case using background subtraction and color histograms.
  - Movement of students has also been tracked with optical flow algorithms, as demonstrated in [54][63].
  - Computer vision has also been applied to automatic detection of hand raises, including classic methods such as skin tone and edge detection [41], as well as newer deep learning techniques [51] (our lab's paper, linjiaojiao's hand-raise detection).
> Face detection
  - Can be used not only to find and count students, but also to estimate their head orientation, coarsely signaling their area of focus [63][73][80].
  - Facial landmarks can offer a wealth of information about students' affective state, such as engagement [76] and frustration [6][31][43], as well as detection of off-task behavior [7].
  - The Computer Expression Recognition Toolbox (CERT) [52] is most widely used in these educational technology applications, though it is limited to videos of single students.
2.3 System Contribution
> As usual, the above classroom sensing systems are critiqued first:
  1) Each publishes isolated metrics independently, without testing or validation in real, large-scale classrooms.
  2) Each system pairs one server with one classroom and cannot be rolled out at campus scale.
  3) These systems rarely target pedagogical uses, and thus do not exploit the recent wave of breakthroughs in computer vision and deep learning for complex classroom scenes.
> "Thus, we believe EduSense is unique in putting together disparate advances from several fields into a comprehensive and scalable system, paired with a holistic evaluation combining both controlled studies and months-long, real-world deployments."
EduSense System
Four key layers: Classrooms layer, Processing layer, Datastore layer, Apps layer
3.1 Sensing
> Early system: depth cameras
> Current system: Lorex LNE8950AB cameras offer a 112° field of view and feature an integrated microphone, costing around $150 in single-unit retail prices. They capture 3840x2160 (4K) video at 15 FPS with 16 kHz mono audio.
3.2 Compute
> Early system:
  * Small Intel NUCs. However, this hardware approach was expensive to scale, deploy and maintain.
  * The early version was a large, monolithic C++ application that was prone to software engineering problems (dependency conflicts, overload when new modules were added), and remote deployment was equally painful.
  * The C++ codebase was also hard to combine with Python, the dominant language for computer vision; even when forced together, the result was time-consuming and unstable. Because components were not isolated from each other, the old system was prone to errors and crashes.
> Current system:
  * The new system uses more stable IP cameras paired with a centrally located campus server, with audio and video streamed between them in real time over RTSP.
  * The custom GPU-equipped EduSense server has 28 physical cores (56 with SMT), 196GB of RAM and nine NVIDIA 1080Ti GPUs.
  * The new system uses Docker (container-based virtualization) to run each module in isolation; Docker's advantages need no elaboration.
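As a rough illustration of the RTSP-plus-central-server setup described above, here is a minimal Python sketch (not from the EduSense codebase) that pulls frames from an IP camera with OpenCV and subsamples them to roughly the 0.5 FPS real-time rate mentioned in Section 3.8; the URL, credentials, and the process_frame stub are placeholders.

```python
import cv2

# Hypothetical RTSP URL for a classroom IP camera; credentials are placeholders.
RTSP_URL = "rtsp://user:pass@camera-hostname:554/stream1"
CAMERA_FPS = 15      # native stream rate
TARGET_FPS = 0.5     # real-time mode processes roughly one frame every two seconds

def process_frame(frame):
    # Placeholder: downstream scene parsing (OpenPose, featurization) would go here.
    print("got frame of shape", frame.shape)

cap = cv2.VideoCapture(RTSP_URL)
frame_interval = int(CAMERA_FPS / TARGET_FPS)  # keep 1 of every 30 frames

frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % frame_interval == 0:
        process_frame(frame)
    frame_idx += 1

cap.release()
```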
Fig. 3. Processing pipeline. Video and audio from classroom cameras first flows into a scene parsing layer, before being featurized by a series of specialized modules. See also Figure 1 and Table 1.
3.3 Scene Parsing
Techniques:
> Multi-person body keypoint (joint) detection: OpenPose (with tested and tuned OpenPose parameters)
> Difficult environment: high, wall-mounted (i.e., non-frontal) and slightly fish-eyed views.
> Algorithm: additional logic to reduce false-positive bodies (e.g., bodies too large or small); inter-frame persistent person IDs with hysteresis (tracking), using a combination of Euclidean distance and body inter-keypoint distance matching
> Speech: predict only silence vs. speech (Laput et al. [48]) + an adaptive background noise filter
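A minimal sketch of the inter-frame persistent-ID idea: greedy nearest-neighbor matching of bodies across consecutive frames using mean keypoint distance. The distance threshold, the greedy strategy, and the absence of hysteresis smoothing are my simplifications, not the paper's exact logic.

```python
import numpy as np

def match_ids(prev_bodies, curr_bodies, max_dist=80.0):
    """Greedy nearest-neighbor matching of bodies across frames.

    prev_bodies: dict {person_id: (K, 2) keypoint array from the previous frame}
    curr_bodies: list of (K, 2) keypoint arrays detected in the current frame
    Returns {person_id: keypoints} with persistent IDs; unmatched detections
    get fresh IDs. Threshold and strategy are illustrative only.
    """
    assigned = {}
    next_id = max(prev_bodies) + 1 if prev_bodies else 0
    used = set()
    for kps in curr_bodies:
        best_id, best_d = None, max_dist
        for pid, prev_kps in prev_bodies.items():
            if pid in used:
                continue
            # Mean per-keypoint Euclidean distance (NaNs for missing joints ignored).
            d = np.nanmean(np.linalg.norm(kps - prev_kps, axis=1))
            if d < best_d:
                best_id, best_d = pid, d
        if best_id is None:          # no close match: a new person entered the scene
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assigned[best_id] = kps
    return assigned
```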
Fig. 4. Top row: Example classroom scenes processed by EduSense (image data is not archived; shown here for reference and with permission). Bottom row: Featurized data, including body and face keypoints, with icons for hand raise, upper body pose, smile, mouth open, and sit/stand classification.
Fig. 5. Example participant from our controlled study. EduSense recognizes three upper body poses (left three images) and various hand raises (right four images). Live classifications from our upper body pose (orange text) and hand raise (yellow text) classifiers are shown.
3.4 Featurization Modules
> See Figures 1 and 3: the featurization modules take the detection and recognition results and turn them into classroom-facing metrics that can be visualized, queried, or inspected when debugging.
> For details: open-source code repository (http://www.EduSense.io).
  - Sit vs. Stand Detection: relative geometry of body keypoints (neck (1), hips (2), knees (2), and feet (2)) + MLP classifier
  - Hand Raise Detection: eight body keypoints per body (neck (1), chest (1), shoulders (2), elbows (2), and wrists (2)) + MLP classifier (see the sketch after this list)
  - Upper Body Pose: eight body keypoints + multiclass MLP model (predicting arms at rest, arms closed (e.g., crossed), and hands on face; see Figure 5 above)
  - Smile Detection: ten mouth landmarks on the outer lip and ten on the inner lip + binary SVM
  - Mouth Open Detection (a potential future way to identify speakers): two features from [71] (left and right / mouth_width) + binary SVM
  - Head Orientation & Class Gaze: perspective-n-point algorithm [50] + anthropometric face data [53] + OpenCV's calib3d module [8]
  - Body Position & Classroom Topology: uses the face keypoints and camera calibration above to estimate student positions, projected into a synthetic top-down view (PS: similar to the student localization in our system, but coarser; it does not detect rows/columns or match student behaviors)
  - Synthetic Accelerometer: simply track the motion of bodies across frames + 3D head position + delta X/Y/Z normalized by the elapsed time
  - Student vs. Instructor Speech: sound and speech detector including 1) the RMS of the student-facing camera's microphone (closest to the instructor), 2) the RMS of the instructor-facing camera's microphone (closest to the students), and the ratio between the two values + random forest classifier (the goal is to distinguish whether the current speech comes from a student or the instructor)
  - Speech Act Delimiting: uses the per-frame speech detection results (unclear; PS: is this meant to segment distinct speech episodes?)
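To make the "keypoints + MLP" recipe concrete, here is a hedged sketch of a hand-raise classifier: normalize the eight upper-body keypoints relative to the neck and shoulder width, then train a small MLP. scikit-learn's MLPClassifier stands in for whatever network the authors actually trained, and the keypoint ordering and normalization are assumptions, not the paper's exact features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed keypoint order: neck, chest, L/R shoulder, L/R elbow, L/R wrist (x, y each).
def hand_raise_features(kps):
    """kps: (8, 2) array of upper-body keypoints in image coordinates."""
    neck = kps[0]
    shoulder_width = np.linalg.norm(kps[2] - kps[3]) + 1e-6
    # Translate to neck-centered coordinates and scale by shoulder width so the
    # feature is invariant to where the student sits and how large they appear.
    return ((kps - neck) / shoulder_width).ravel()

def train_hand_raise_classifier(X_kps, y):
    """X_kps: list of (8, 2) keypoint arrays; y: 0/1 hand-raise labels."""
    X = np.stack([hand_raise_features(k) for k in X_kps])
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000)
    clf.fit(X, y)
    return clf
```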
Fig. 6. Left: Training data capture rig in an example classroom. Right: Closeup of center mast, with six cameras.
3.5 Training Data Capture
> The various detectors require large amounts of labeled data for training, which raises two problems:
  1) Many annotators must be recruited to label events such as hand raises.
  2) Diverse data from different viewpoints must be collected, so the authors had to set up their own capture hardware and scenes.
3.6 Datastore
1) Non-image classroom data (ASCII JSON): about 250MB for one ~80-minute class with 25 students
2) Infilled data (full class video): about 16GB per class at 15 FPS with 4K frames from both front and back cameras
3) Backend server built from a web interface (Go app) and MongoDB, exposed via a REST API over Transport Layer Security (TLS) (a different technical route and set of details from ours)
4) "We do not save these frames long-term to mitigate obvious privacy concerns" (data is simply deleted rather than retained, avoiding privacy issues)
5) Secure Network Attached Storage (NAS)
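For illustration only, a sketch of what one non-image, per-frame record might look like when written to MongoDB with pymongo; the connection string, database, collection, and field names are invented and not taken from the EduSense schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
frames = client["edusense_demo"]["frames"]          # hypothetical database/collection names

# One featurized (non-image) frame record, roughly mirroring the module outputs above.
doc = {
    "session_id": "course-101_2019-10-01",          # invented identifier
    "camera": "student-facing",
    "timestamp": datetime.now(timezone.utc),
    "bodies": [
        {"track_id": 3, "hand_raised": False, "upper_body_pose": "arms_at_rest",
         "sit_stand": "sit", "smile": None, "mouth_open": False},
    ],
    "audio": {"speech": True, "speaker": "instructor"},
}
frames.insert_one(doc)
```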
3.7 Automated Scheduling & Classroom Processing Instances
> Scheduler: SOS JobScheduler (a different technical route from ours; we use apscheduler, an open-source scheduler on the Python platform)
> FFMPEG instances: record the front and back camera streams (again different; we use opencv)
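A small sketch of the alternative stack mentioned in the note (Python's apscheduler plus an ffmpeg subprocess), rather than the paper's SOS JobScheduler + FFMPEG setup; the timetable, RTSP URL, and output path are placeholders.

```python
import subprocess
from apscheduler.schedulers.blocking import BlockingScheduler

RTSP_URL = "rtsp://user:pass@camera-hostname:554/stream1"   # placeholder

def record_lecture(duration_s=80 * 60, out_path="/data/lecture.mp4"):
    # Copy the camera stream to disk without re-encoding for the length of one class.
    subprocess.run([
        "ffmpeg", "-y", "-rtsp_transport", "tcp", "-i", RTSP_URL,
        "-t", str(duration_s), "-c", "copy", out_path,
    ], check=True)

sched = BlockingScheduler()
# Start recording every Monday/Wednesday at 10:30, matching a hypothetical timetable.
sched.add_job(record_lecture, "cron", day_of_week="mon,wed", hour=10, minute=30)
sched.start()
```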
3.8 High Temporal Resolution Infilling
EduSense has two processing modes: real-time mode (0.5 FPS) and infilling mode (15 FPS video).
> Real-time mode, as the name suggests, produces the various metrics while class is in progress; current throughput is about one frame every two seconds.
> Infilling mode is non-real-time analysis run during or after class; it provides high temporal resolution and complements the real-time pipeline. This finer-grained analysis can also feed end-of-day reports or semester-long analytics.
3.9 Privacy Preservation
> Measures already taken: EduSense does not archive classroom video; when infilling is needed, video is held in a temporary cache and deleted once analysis finishes; access to classroom data is controlled by user roles and permissions to prevent leaks; individual students are tracked, but no private information is used and the tracking IDs assigned in each class session are not linked across sessions; video temporarily retained for development (testing, validation, and labeling to grow the dataset) is deleted promptly after use.
> Future measures: show only high-level aggregate metrics (class aggregates).
Fig. 7. Although EduSense is mostly launched as a headless process, we built a utilitarian graphical user interface for debugging and demonstration.
3.10 Debug and Development Interface
Qt5 GUI + RTSP/local filesystem input + many widgets
3.11 Open Source and Community Involvement
- Hope that others will deploy the system
- Serve as a comprehensive springboard
- Cultivate a community
Controlled Study
4.1 Overall Procedure
> Five exemplary classrooms
> 5 instructors and 25 student participants
> Participants followed a pre-printed instruction sheet and performed the requested actions in sequence, while the debug system simultaneously recorded the time, type, and image data of each action.
Fig. 10. Histogram showing the percent of different body keypoints found in three of our experimental contexts.
4.2 Body Keypointing
> OpenPose is used for pose estimation, but it is not robust in classroom scenes, so the authors tuned its parameters and added pose-based logic to improve stability and accuracy (similar to my own approach to improving OpenPose?).
> The authors give no rigorous evaluation of the modified OpenPose; they only report keypoint statistics on a small dataset (is that a sound methodology?).
> As shown in the figure above, detection accuracy is reported for nine body keypoints; unsurprisingly the upper body scores higher than the lower body (but how much data these accuracies were computed on is unknown).
4.3 Phase A: Hand Raises & Upper Body Pose
> Seven upper-body pose classes: arms resting, left hand raised, left hand raised partial, right hand raised, right hand raised partial, arms closed, and hands on face
> Each student participant performed each class three times during a session, for 21 instances per student.
> Each instructor performed arms resting and arms closed three times each, at different positions (left front, center front, right front), for 6 instances per instructor.
> "We only studied frames where participants' upper bodies were captured (consisting of head, chest, shoulder, elbow, and wrist keypoints - without these eight keypoints, our hand raise classifier returns null)."
> The paper reports hand-raise detection accuracy of 94.6%, and 98.6% (students) and 100% (instructors) for the other three upper-body poses, but it does not state the training/test set sizes, and these results come from a deliberately staged lab setting; how convincing are they?
Fig. 11. The mouth states captured in our controlled study: mouth closed, closed smile, teeth smile, and mouth open.
4.4 Phase B: Mouth State
> Four mouth states: neutral (mouth closed), mouth open (teeth apart, as if talking), closed smile (no teeth showing), teeth smile (with teeth showing)
> Each student performed each state three times, for 12 instances per student.
> Each instructor performed each state three times at different positions at the front of the room, for 12 instances per instructor.
> Built on the face landmark detection, smile classification reached 78.6% and 87.2% accuracy, and mouth-open classification 83.6% and 82.1%. Still no mention of the amount of data.
> The authors concede that, due to resolution, face landmarks for students in the back rows are essentially undetectable, and optimistically suggest higher-resolution cameras will solve this. (In our own tests, even a 4K camera still suffers low-resolution problems, and landmarks also struggle with large head angles and occlusion.)
4.5 Phase C: Sit vs. Stand
> This phase distinguishes standing from sitting.
> Following the same procedure, each student participant performed both poses three times at random during the test, for 6 instances per participant; instructors remained standing throughout and did not take part in this phase.
> Sit/stand classification accuracy was about 84.4% (the authors again omit the test-set size, but from the error counts in this section the total is roughly 143 instances).
> Because classification relies only on 2D keypoints, the authors note it is strongly affected by camera viewpoint. (Of course; it is still less accurate and less robust than our direct standing detection.)
> The authors suggest depth data could improve this in the future. (I doubt a depth camera necessarily helps, and depth data is not easy to collect or train on.)
Fig. 12. Example head orientations requested in our study, with detected face landmarks shown.
4.6 Phase D: Head Orientation
> Eight head orientations: three possible pitches ("down" -15°, "straight" 0°, "up" +15°) × three possible yaws ("left" -20°, "straight" 0°, "right" +20°), omitting directly straight ahead (i.e., 0°/0°). (Again, a detection/estimation problem is reduced to classification.)
> To elicit the target head poses, participants used a smartphone running a pose-estimation app together with a printed instruction sheet taped to the desk; see the paper for the full procedure.
> Each student performed each of the eight orientations twice, for 16 instances per participant.
> "Unfortunately, in many frames we collected, ~20% of landmarks were occluded by the smartphones we gave participants - an experimental design error in hindsight." (Unsurprisingly, head-pose estimation that relies on face landmarks is unreliable even in a lab setting.)
> The authors claim the result "should be sufficient for coarse estimation of attention." (They discarded the samples with poor landmark detection, leaving only about a quarter of the data; calling results on what remains "sufficient" is a stretch, bordering on wishful thinking.)
> The authors conclude the main problem is landmark detection, and that detecting enough landmarks in the future would solve head orientation. (I remain skeptical of this technical route.)
4.7 Phase E: Speech Procedure
> This phase only detects whether anyone (instructor or student) is speaking.
> Each of the 30 participants spoke once, yielding 30 five-second speech clips and 30 non-speech clips, which were then classified. No-speech clips were recognized with 100% accuracy; speech clips had a single error, for 98.3% accuracy.
> My take: this speech metric and pipeline are overly simple, and the test set is too small to be convincing.
4.8 Face Landmarks Results
> Face landmark detection uses off-the-shelf algorithms, citing [4][13][44]; most likely [13] (CMU's OpenPose).
> Again in the lab setting, this section reports landmark detection accuracies that are not very convincing.
> "Poor registration of landmarks was due to limited resolution" (the low-resolution problem again).
4.9 Classroom Position & Sensing Accuracy vs. Distance
> "We manually recorded the distance of all participants from the camera using a surveyors' rope."
> Computer-vision-driven modules are sensitive to image resolution and vary in accuracy as a function of distance from the camera.
> One question: won't the instructor and students appear in each other's camera views? If they do, the paper does not discuss how the two are distinguished.
Fig. 15. Runtime performance of EduSense's various processing stages at different loads (i.e., number of students).
4.10 Framerate and Latency
> This evaluation only considers processing of saved video, not the live system.
> Unsurprisingly, the two foundational stages, body keypointing and face landmarking, take most of the time; face landmarking runtime in particular grows with the number of people in the image. (One doubt: pose estimation uses bottom-up OpenPose, so its runtime should not grow simply linearly with the number of people, but in the figure the runtime does not increase at all as the count goes from 0 to 54, which cannot be right; in my own tests, OpenPose's joint-grouping stage also consumes CPU time. The reported OpenPose runtime of only a few tens of milliseconds is also hard to believe; even with an input at half of 1K resolution it takes around a second.)
> The runtimes of the other processing stages look unremarkable.
Real-world Classrooms Study
5.1 Deployment and Procedure
> "We deployed EduSense in 13 classrooms at our institution and recruited 22 courses for an 'in-the-wild' evaluation (with a total student enrollment of 687)."
> 360.8 hours of classroom data
> 438,331 student-facing frames and 733,517 instructor-facing frames were processed live, with a further 18.3M frames infilled after class to bring the entire corpus up to a 15 FPS temporal resolution.
> "We randomly pulled 100 student-view frames (containing 1797 student body instances) and 300 instructor-view frames (containing 291 instructor body instances; i.e., nine frames did not contain instructors) from our corpus."
> The paper claims this subset is sufficiently large and diverse (I beg to differ).
> Ground-truth labels were produced by two hired human coders who were not involved in the project. (Compared to our own annotation effort, this amount of labeling work is thin.)
> It was not possible to accurately label head orientation and classroom position. (Many metrics are only coarse estimates; position would be more precisely measured and evaluated with our row/column representation.)
5.2 Body Keypointing Results
> EduSense found 92.2% of student bodies and 99.6% of instructor bodies. (The real-classroom tests are still limited to a small amount of data and are not very convincing.)
> 59.0% of student and 21.0% of instructor body instances had at least one visibly misaligned keypoint. (Real-world quality may not be that good.)
> "We were surprised that our real-world results were comparable to our controlled study, despite operating in seemingly much more challenging scenes." (The authors' analysis: compared with the deliberately varied poses and head orientations of the controlled study, real classrooms, although more chaotic, mostly contain students facing forward and leaning on their desks, which is easier to recognize.)
5.3 Face Landmarking Results
> Again reports face detection and face landmark accuracies for students and instructors on a partial dataset. (Lacks results on a large-scale labeled dataset.)
> The authors note that despite the more complex real-world scenes, the face detection results remained quite robust. (That is to the credit of the off-the-shelf algorithm, so what is the point of highlighting it here?)
5.4 Hand Raise Detection & Upper Body Pose Classification
> "Hand raises in our real-world dataset were exceedingly rare." (No surprise: with only the 22 courses above, and in a university setting, hand-raise examples are bound to be scarce.)
> Out of the 1797 student body instances, only 6 had a hand raised (less than 0.3% of all body instances). Of those six, EduSense correctly labeled three, incorrectly labeled three, and missed zero, for a true-positive accuracy of 50.0%. There were also 58 false-positive hand-raise instances (3.8% of all body instances). (The hand-raise results are dismal.)
> The other poses also fare poorly in the wild, with the same problems of too little data and weak evidence.
5.5 Mouth Smile and Open Detection
> Only 17.1% of student body instances had the requisite mouth landmarks for EduSense's smile detector to execute (even less usable data); student smile vs. no-smile classification accuracy was 77.1%.
> Only 21.0% of instructor body instances had the required facial landmarks (again a much-reduced test set); instructor smile vs. no-smile classification accuracy was 72.6%.
> Mouth open/closed detection was stronger: 96.5% (students) and 82.3% (instructors), though note that the large majority of instances were mouth-closed (about 94.8%). (The authors argue mouth opening is subtler to perceive than smiling.)
> Finally, the authors again cite resolution: open-mouth detection depends heavily on the resolution of the mouth, and annotators' judgments of an open mouth are somewhat subjective, so this metric is only preliminary.
5.6 Sit vs. Stand Classification
> "We found that a vast majority of student lower bodies were occluded, which did not permit our classifier to produce a sit/stand classification, and thus we omit these results." (The real-world evaluation therefore excludes student sit/stand classification.)
> Even for instructors, lower-body keypoints were detected in only 66.3% of frames; within those, sit and stand recognition accuracies were 90.5% and 95.2% respectively. (With so little data, how credible is this?)
5.7 Speech/Silence & Student/Instructor Detection
> For speech/silence classification, the authors selected 50 five-second clips with speech and 50 five-second clips without, and measured 82% accuracy.
> For student/instructor detection, they selected 25 ten-second clips of instructor speech and 25 of student speech; the speaker was correctly distinguished only 60% of the time. (As expected; that is close to the 50% of random guessing.)
> The authors note that speaker attribution is heavily affected by room geometry and microphone placement, and that only two audio capture devices is not enough; solving it would require a more sophisticated approach: speaker identification.
5.8 Framerate & Latency
> See Figure 15 for the detailed runtime analysis.
> "We achieve a mean student view processing framerate of between 0.3 and 2.0 FPS." (Is offline video processing really this fast at the current stage?) The instructor view is 2-3 times faster.
> From the latency analysis, the end-to-end delay of the real-time system is 3-5 seconds, with the stages ordered from longest to shortest as: IP cameras > backend processing > storing results > transmission (wired network).
> The authors expect higher-end IP cameras to reduce latency and enable large-scale deployment of the real-time system. (5G plus high-end embedded camera processing chips?)
End-user Applications
Fig. 16. Preliminary classroom data visualization app.
> "Our future goal with EduSense is to power a suite of end-user, data-driven applications."
> How the front-end presentation is designed also matters; the authors propose several possible directions:
  - Tracking the elapsed time of continuous speech, to help instructors inject lectures with pauses, as well as opportunities for student questions and discussion (instructor speech detection + timing?)
  - Automatically generated suggestions to increase movement at the front of the class (instructor trajectory?)
  - Modifying the ratio of facing the board vs. facing the class (instructor orientation ratio?)
  - A cumulative heatmap of all student hand raises so far in the lecture, which could facilitate selecting a student who has yet to contribute (hand-raise heatmap?)
  - A histogram of the instructor's gaze could highlight areas of the classroom receiving less visual attention (instructor gaze tracking + statistics?)
> Beyond real-time in-class feedback, after-class and end-of-semester summary reports also matter (rendered as PDFs and emailed to specific audiences).
> The authors then reiterate the potential benefits of EduSense detecting instructor metrics and offering real-time advice (including gaze direction [65], gesticulation through hand movement [81], smiling [65], and moving around the classroom [55][70]).
> A web-based data visualizer (Figure 16): Node.js + ECharts + React (front-end stack)
Discussion
> "Taken together, our controlled and real classroom studies offer the first comprehensive evaluation of a holistic audio- and computer-vision-driven classroom sensing system, offering new insights into the feasibility of automated class analytics." (A long sentence and a bold claim.)
> Based on the experiments and deployments, the authors give deployment recommendations: avoid overly large classrooms (no more than roughly 8 m front to back) and mount cameras where they provide a good view of the room.
> The authors note that algorithmic errors in the system propagate downstream; the paper lays out the upper and lower bounds of each module stage by stage.
> They also acknowledge that much work remains: improving the system will need continued help from the research community, along with more engagement with end users at universities and high schools.
> "We also envision EduSense as a stepping stone towards the furthering of a university culture that values professional development for teaching." (A fine vision, and also ours for the system we are building.)
Conclusion
1. We have presented our work on EduSense, a comprehensive classroom sensing system that produces a wide variety of theoretically-motivated features, using a distributed array of commodity cameras. (Contribution)
2. We deployed and tested our system in a controlled study, as well as real classrooms, quantifying the accuracy of key system features in both settings. (Analysis)
3. We believe EduSense is an important step towards the vision of automated classroom analytics, which holds the promise of offering a fidelity, scale and temporal resolution that are impractical with the current practice of in-class observers. (Vision)
4. To further our goal of an extensible platform for classroom sensing that others can also build on, EduSense is open sourced and available to the community. (Call to action)