
Handbook of Human-Machine Systems


IEEE Press Series on Human-Machine Systems, 1st edition

by: Giancarlo Fortino, David Kaber, Andreas Nürnberger, David Mendonça

126,99 €

Publisher: Wiley
Format: EPUB
Published: 4 July 2023
ISBN/EAN: 9781119863656
Language: English
Pages: 528

DRM-protected eBook; to read it you will need, for example, Adobe Digital Editions and an Adobe ID.

Description

<b>Handbook of Human-Machine Systems</b> <p><b>Insightful and cutting-edge discussions of recent developments in human-machine systems</b> <p>In <i>Handbook of Human-Machine Systems</i>, a team of distinguished researchers delivers a comprehensive exploration of human-machine systems (HMS) research and development from a variety of illuminating perspectives. The book offers a big picture look at state-of-the-art research and technology in the area of HMS. Contributing authors cover Brain-Machine Interfaces and Systems, including assistive technologies like devices used to improve locomotion. They also discuss advances in the scientific and engineering foundations of Collaborative Intelligent Systems and Applications. <p>Companion technology, which combines trans-disciplinary research in fields like computer science, AI, and cognitive science, is explored alongside the applications of human cognition in intelligent and artificially intelligent system designs, human factors engineering, and various aspects of interactive and wearable computers and systems. The book also includes: <ul><li>A thorough introduction to human-machine systems via the use of emblematic use cases, as well as discussions of potential future research challenges</li> <li>Comprehensive explorations of hybrid technologies, which focus on transversal aspects of human-machine systems</li> <li>Practical discussions of human-machine cooperation principles and methods for the design and evaluation of a brain-computer interface</li></ul> <p>Perfect for academic and technical researchers with an interest in HMS, <i>Handbook of Human-Machine Systems</i> will also earn a place in the libraries of technical professionals practicing in areas including computer science, artificial intelligence, cognitive science, engineering, psychology, and neurobiology.
<p>Editors Biography xxi</p> <p>List of Contributors xxiii</p> <p>Preface xxxiii</p> <p><b>1 Introduction 1<br /> </b><i>Giancarlo Fortino, David Kaber, Andreas Nürnberger, and David Mendonça</i></p> <p>1.1 Book Rationale 1</p> <p>1.2 Chapters Overview 2</p> <p>Acknowledgments 8</p> <p>References 8</p> <p><b>2 Brain–Computer Interfaces: Recent Advances, Challenges, and Future Directions 11<br /> </b><i>Tiago H. Falk, Christoph Guger, and Ivan Volosyak</i></p> <p>2.1 Introduction 11</p> <p>2.2 Background 12</p> <p>2.2.1 Active/Reactive BCIs 13</p> <p>2.2.2 Passive BCIs 14</p> <p>2.2.3 Hybrid BCIs 15</p> <p>2.3 Recent Advances and Applications 15</p> <p>2.3.1 Active/Reactive BCIs 15</p> <p>2.3.2 Passive BCIs 16</p> <p>2.3.3 Hybrid BCIs 16</p> <p>2.4 Future Research Challenges 16</p> <p>2.4.1 Current Research Issues 17</p> <p>2.4.2 Future Research Directions 17</p> <p>2.5 Conclusions 18</p> <p>References 18</p> <p><b>3 Brain–Computer Interfaces for Affective Neurofeedback Applications 23<br /> </b><i>Lucas R. Trambaiolli and Tiago H. Falk</i></p> <p>3.1 Introduction 23</p> <p>3.2 Background 23</p> <p>3.3 State-of-the-Art 24</p> <p>3.3.1 Depressive Disorder 25</p> <p>3.3.2 Posttraumatic Stress Disorder, PTSD 26</p> <p>3.4 Future Research Challenges 27</p> <p>3.4.1 Open Challenges 27</p> <p>3.4.2 Future Directions 28</p> <p>3.5 Conclusion 28</p> <p>References 29</p> <p><b>4 Pediatric Brain–Computer Interfaces: An Unmet Need 35<br /> </b><i>Eli Kinney-Lang, Erica D. Floreani, Niloufaralsadat Hashemi, Dion Kelly, Stefanie S. 
Bradley, Christine Horner, Brian Irvine, Zeanna Jadavji, Danette Rowley, Ilyas Sadybekov, Si Long Jenny Tou, Ephrem Zewdie, Tom Chau, and Adam Kirton</i></p> <p>4.1 Introduction 35</p> <p>4.1.1 Motivation 36</p> <p>4.2 Background 36</p> <p>4.2.1 Components of a BCI 36</p> <p>4.2.1.1 Signal Acquisition 36</p> <p>4.2.1.2 Signal Processing 36</p> <p>4.2.1.3 Feedback 36</p> <p>4.2.1.4 Paradigms 37</p> <p>4.2.2 Brain Anatomy and Physiology 37</p> <p>4.2.3 Developmental Neurophysiology 38</p> <p>4.2.4 Clinical Translation of BCI 38</p> <p>4.2.4.1 Assistive Technology (AT) 38</p> <p>4.2.4.2 Clinical Assessment 39</p> <p>4.3 Current Body of Knowledge 39</p> <p>4.4 Considerations for Pediatric BCI 40</p> <p>4.4.1 Developmental Impact on EEG-based BCI 40</p> <p>4.4.2 Hardware for Pediatric BCI 41</p> <p>4.4.3 Signal Processing for Pediatric BCI 41</p> <p>4.4.3.1 Feature Extraction, Selection and Classification 42</p> <p>4.4.3.2 Emerging Techniques 42</p> <p>4.4.4 Designing Experiments for Pediatric BCI 43</p> <p>4.4.5 Meaningful Applications for Pediatric BCI 43</p> <p>4.4.6 Clinical Translation of Pediatric BCI 44</p> <p>4.5 Conclusions 44</p> <p>References 45</p> <p><b>5 Brain–Computer Interface-based Predator–Prey Drone Interactions 49<br /> </b><i>Abdelkader Nasreddine Belkacem and Abderrahmane Lakas</i></p> <p>5.1 Introduction 49</p> <p>5.2 Related Work 50</p> <p>5.3 Predator–Prey Drone Interaction 51</p> <p>5.4 Conclusion and Future Challenges 57</p> <p>References 58</p> <p><b>6 Levels of Cooperation in Human–Machine Systems: A Human–BCI–Robot Example 61<br /> </b><i>Marie-Pierre Pacaux-Lemoine, Lydia Habib, and Tom Carlson</i></p> <p>6.1 Introduction 61</p> <p>6.2 Levels of Cooperation 61</p> <p>6.3 Application to the Control of a Robot by Thought 63</p> <p>6.3.1 Designing the System 64</p> <p>6.3.2 Experiments and Results 66</p> <p>6.4 Results from the Methodological Point of View 67</p> <p>6.5 Conclusion and Perspectives 68</p> <p>References 69</p> <p><b>7 
Human–Machine Social Systems: Test and Validation via Military Use Cases 71<br /> </b><i>Charlene K. Stokes, Monika Lohani, Arwen H. DeCostanza, and Elliot Loh</i></p> <p>7.1 Introduction 71</p> <p>7.2 Background Summary: From Tools to Teammates 72</p> <p>7.2.1 Two Sides of the Equation 72</p> <p>7.2.2 Moving Beyond the Cognitive Revolution 73</p> <p>7.2.2.1 A Rediscovery of the Unconscious 74</p> <p>7.3 Future Research Directions 75</p> <p>7.3.1 Machine: Functional Designs 75</p> <p>7.3.2 Human: Ground Truth 76</p> <p>7.3.2.1 Physiological Computing 76</p> <p>7.3.3 Context: Tying It All Together 77</p> <p>7.3.3.1 Training and Team Models 77</p> <p>7.4 Conclusion 79</p> <p>References 79</p> <p><b>8 The Role of Multimodal Data for Modeling Communication in Artificial Social Agents 83<br /> </b><i>Stephanie Gross and Brigitte Krenn</i></p> <p>8.1 Introduction 83</p> <p>8.2 Background 84</p> <p>8.2.1 Context 84</p> <p>8.2.2 Basic Definitions 84</p> <p>8.3 Related Work 84</p> <p>8.3.1 HHI Data 85</p> <p>8.3.2 HRI Data 85</p> <p>8.3.2.1 Joint Attention and Robot Turn-Taking Capabilities 85</p> <p>8.3.3 Public Availability of the Data 87</p> <p>8.4 Datasets and Resulting Implications 87</p> <p>8.4.1 Human Communicative Signals 87</p> <p>8.4.1.1 Experimental Setup 87</p> <p>8.4.1.2 Data Analysis and Results 88</p> <p>8.4.2 Humans Reacting to Robot Signals 89</p> <p>8.4.2.1 Comparing Different Robotic Turn-Giving Signals 89</p> <p>8.4.2.2 Comparing Different Transparency Mechanisms 90</p> <p>8.5 Conclusions 91</p> <p>8.6 Future Research Challenges 91</p> <p>References 91</p> <p><b>9 Modeling Interactions Happening in People-Driven Collaborative Processes 95<br /> </b><i>Maximiliano Canche, Sergio F. 
Ochoa, Daniel Perovich, and Rodrigo Santos</i></p> <p>9.1 Introduction 95</p> <p>9.2 Background 97</p> <p>9.3 State-of-the-Art in Interaction Modeling Languages and Notations 98</p> <p>9.3.1 Visual Languages and Notations 99</p> <p>9.3.2 Comparison of Interaction Modeling Languages and Notations 100</p> <p>9.4 Challenges and Future Research Directions 101</p> <p>References 102</p> <p><b>10 Transparent Communications for Human–Machine Teaming 105<br /> </b><i>Jessie Y. C. Chen</i></p> <p>10.1 Introduction 105</p> <p>10.2 Definitions and Frameworks 105</p> <p>10.3 Implementation of Transparent Human–Machine Interfaces in Intelligent Systems 106</p> <p>10.3.1 Human–Robot Interaction 106</p> <p>10.3.2 Multiagent Systems and Human–Swarm Interaction 108</p> <p>10.3.3 Automated/Autonomous Driving 109</p> <p>10.3.4 Explainable AI-Based Systems 109</p> <p>10.3.5 Guidelines and Assessment Methods 109</p> <p>10.4 Future Research Directions 110</p> <p>References 111</p> <p><b>11 Conversational Human–Machine Interfaces 115<br /> </b><i>María Jesús Rodríguez-Sánchez, Kawtar Benghazi, David Griol, and Zoraida Callejas</i></p> <p>11.1 Introduction 115</p> <p>11.2 Background 115</p> <p>11.2.1 History of the Development of the Field 116</p> <p>11.2.2 Basic Definitions 117</p> <p>11.3 State-of-the-Art 117</p> <p>11.3.1 Discussion of the Most Important Scientific/Technical Contributions 117</p> <p>11.3.2 Comparison Table 119</p> <p>11.4 Future Research Challenges 121</p> <p>11.4.1 Current Research Issues 121</p> <p>11.4.2 Future Research Directions Dealing with the Current Issues 121</p> <p>References 122</p> <p><b>12 Interaction-Centered Design: An Enduring Strategy and Methodology for Sociotechnical Systems 125<br /> </b><i>Ming Hou, Scott Fang, Wenbi Wang, and Philip S. E. 
Farrell</i></p> <p>12.1 Introduction 125</p> <p>12.2 Evolution of HMS Design Strategy 126</p> <p>12.2.1 A HMS Technology: Intelligent Adaptive System 126</p> <p>12.2.2 Evolution of IAS Design Strategy 128</p> <p>12.3 State-of-the-Art: Interaction-Centered Design 130</p> <p>12.3.1 A Generic Agent-based ICD Framework 130</p> <p>12.3.2 IMPACTS: A Human–Machine Teaming Trust Model 132</p> <p>12.3.3 ICD Roadmap for IAS Design and Development 133</p> <p>12.3.4 ICD Validation, Adoption, and Contributions 134</p> <p>12.4 IAS Design Challenges and Future Work 135</p> <p>12.4.1 Challenges of HMS Technology 136</p> <p>12.4.2 Future Work in IAS Design and Validation 136</p> <p>References 137</p> <p><b>13 Human–Machine Computing: Paradigm, Challenges, and Practices 141<br /> </b><i>Zhiwen Yu, Qingyang Li, and Bin Guo</i></p> <p>13.1 Introduction 141</p> <p>13.2 Background 142</p> <p>13.2.1 History of the Development 142</p> <p>13.2.2 Basic Definitions 143</p> <p>13.3 State of the Art 144</p> <p>13.3.1 Technical Contributions 144</p> <p>13.3.2 Comparison Table 148</p> <p>13.4 Future Research Challenges 150</p> <p>13.4.1 Current Research Issues 150</p> <p>13.4.2 Future Research Directions 151</p> <p>References 152</p> <p><b>14 Companion Technology 155<br /> </b><i>Andreas Wendemuth</i></p> <p>14.1 Introduction 155</p> <p>14.2 Background 155</p> <p>14.2.1 History 156</p> <p>14.2.2 Basic Definitions 157</p> <p>14.3 State-of-the-Art 158</p> <p>14.3.1 Discussion of the Most Important Scientific/Technical Contributions 159</p> <p>14.4 Future Research Challenges 159</p> <p>14.4.1 Current Research Issues 159</p> <p>14.4.2 Future Research Directions Dealing with the Current Issues 160</p> <p>References 161</p> <p><b>15 A Survey on Rollator-Type Mobility Assistance Robots 165<br /> </b><i>Milad Geravand, Christian Werner, Klaus Hauer, and Angelika Peer</i></p> <p>15.1 Introduction 165</p> <p>15.2 Mobility Assistance Platforms 165</p> <p>15.2.1 Actuation 166</p> <p>15.2.2 Kinematics 
166</p> <p>15.2.2.1 Locomotion Support 166</p> <p>15.2.2.2 STS Support 166</p> <p>15.2.3 Sensors 168</p> <p>15.2.4 Human–Machine Interfaces 168</p> <p>15.3 Functionalities 168</p> <p>15.3.1 STS Assistance 169</p> <p>15.3.2 Walking Assistance 169</p> <p>15.3.2.1 Maneuverability Improvement 169</p> <p>15.3.2.2 Gravity Compensation 170</p> <p>15.3.2.3 Obstacle Avoidance 170</p> <p>15.3.2.4 Falls Risk Prediction and Fall Prevention 170</p> <p>15.3.3 Localization and Navigation 170</p> <p>15.3.3.1 Map Building and Localization 171</p> <p>15.3.3.2 Path Planning 171</p> <p>15.3.3.3 Assisted Localization 171</p> <p>15.3.3.4 Assisted Navigation 171</p> <p>15.3.4 Further Functionalities 171</p> <p>15.3.4.1 Reminder Systems 171</p> <p>15.3.4.2 Health Monitoring 171</p> <p>15.3.4.3 Communication, Information, Entertainment, and Training 172</p> <p>15.4 Conclusion 172</p> <p>References 173</p> <p><b>16 A Wearable Affective Robot 181<br /> </b><i>Jia Liu, Jinfeng Xu, Min Chen, and Iztok Humar</i></p> <p>16.1 Introduction 181</p> <p>16.2 Architecture Design and Characteristics 183</p> <p>16.2.1 Architecture of a Wearable Affective Robot 183</p> <p>16.2.2 Characteristics of a Wearable Affective Robot 184</p> <p>16.3 Design of the Wearable, Affective Robot’s Hardware 185</p> <p>16.3.1 AIWAC Box Hardware Design 185</p> <p>16.3.2 Hardware Design of the EEG Acquisition 185</p> <p>16.3.3 AIWAC Smart Tactile Device 185</p> <p>16.3.4 Prototype of the Wearable Affective Robot 186</p> <p>16.4 Algorithm for the Wearable Affective Robot 186</p> <p>16.4.1 Algorithm for Affective Recognition 186</p> <p>16.4.2 User-Behavior Perception based on a Brain-Wearable Device 186</p> <p>16.5 Life Modeling of the Wearable Affective Robot 187</p> <p>16.5.1 Data Set Labeling and Processing 188</p> <p>16.5.2 Multidimensional Data Integration 188</p> <p>16.5.3 Modeling of Associated Scenarios 188</p> <p>16.6 Challenges and Prospects 189</p> <p>16.6.1 Research Challenges of the Wearable Affective Robot 
189</p> <p>16.6.2 Application Scenarios for the Wearable Affective Robot 189</p> <p>16.7 Conclusions 190</p> <p>References 190</p> <p><b>17 Visual Human–Computer Interactions for Intelligent Vehicles 193<br /> </b><i>Xumeng Wang, Wei Chen, and Fei-Yue Wang</i></p> <p>17.1 Introduction 193</p> <p>17.2 Background 193</p> <p>17.3 State-of-the-Art 194</p> <p>17.3.1 VHCI in Vehicles 194</p> <p>17.3.1.1 Information Feedback from Intelligent Vehicles 195</p> <p>17.3.1.2 Human-Guided Driving 195</p> <p>17.3.2 VHCI Among Vehicles 195</p> <p>17.3.3 VHCI Beyond Vehicles 195</p> <p>17.4 Future Research Challenges 196</p> <p>17.4.1 VHCI for Intelligent Vehicles 196</p> <p>17.4.1.1 Vehicle Development 196</p> <p>17.4.1.2 Vehicle Manufacture 197</p> <p>17.4.1.3 Preference Recording 197</p> <p>17.4.1.4 Vehicle Usage 197</p> <p>17.4.2 VHCI for Intelligent Transportation Systems 198</p> <p>17.4.2.1 Parallel World 198</p> <p>17.4.2.2 The Framework of Intelligent Transportation Systems 198</p> <p>References 199</p> <p><b>18 Intelligent Collaboration Between Humans and Robots 203<br /> </b><i>Andrea Maria Zanchettin</i></p> <p>18.1 Introduction 203</p> <p>18.2 Background 203</p> <p>18.2.1 Context 203</p> <p>18.2.2 Basic Definitions 204</p> <p>18.3 Related Work 205</p> <p>18.4 Validation Cases 206</p> <p>18.4.1 A Simple Verification Scenario 207</p> <p>18.4.2 Activity Recognition Based on Semantic Hand-Object Interaction 208</p> <p>18.5 Conclusions 210</p> <p>18.6 Future Research Challenges 210</p> <p>References 210</p> <p><b>19 To Be Trustworthy and To Trust: The New Frontier of Intelligent Systems 213<br /> </b><i>Rino Falcone, Alessandro Sapienza, Filippo Cantucci, and Cristiano Castelfranchi</i></p> <p>19.1 Introduction 213</p> <p>19.2 Background 214</p> <p>19.3 Basic Definitions 214</p> <p>19.4 State-of-the-Art 215</p> <p>19.4.1 Trust in Different Domains 215</p> <p>19.4.2 Selected Articles 215</p> <p>19.4.3 Differences in the Use of Trust 216</p> <p>19.4.4 Approaches to Model 
Trust 217</p> <p>19.4.5 Sources of Trust 218</p> <p>19.4.6 Different Computational Models of Trust 218</p> <p>19.5 Future Research Challenges 220</p> <p>References 221</p> <p><b>20 Decoding Humans’ and Virtual Agents’ Emotional Expressions 225<br /> </b><i>Terry Amorese, Gennaro Cordasco, Marialucia Cuciniello, Olga Shevaleva, Stefano Marrone, Carl Vogel, and Anna Esposito</i></p> <p>20.1 Introduction 225</p> <p>20.2 Related Work 226</p> <p>20.3 Materials and Methodology 227</p> <p>20.3.1 Participants 227</p> <p>20.3.2 Stimuli 228</p> <p>20.3.3 Tools and Procedures 228</p> <p>20.4 Descriptive Statistics 229</p> <p>20.5 Data Analysis and Results 230</p> <p>20.5.1 Comparison Synthetic vs. Naturalistic Experiment 234</p> <p>20.6 Discussion and Conclusions 235</p> <p>Acknowledgment 238</p> <p>References 238</p> <p><b>21 Intelligent Computational Edge: From Pervasive Computing and Internet of Things to Computing Continuum 241<br /> </b><i>Radmila Juric</i></p> <p>21.1 Introduction 241</p> <p>21.2 The Journey of Pervasive Computing 242</p> <p>21.3 The Power of the IoT 243</p> <p>21.3.1 Inherent Problems with the IoT 244</p> <p>21.4 IoT: The Journey from Cloud to Edge 245</p> <p>21.5 Toward Intelligent Computational Edge 246</p> <p>21.6 Is Computing Continuum the Answer? 247</p> <p>21.7 Do We Have More Questions than Answers? 248</p> <p>21.8 What Would our Vision Be? 
249</p> <p>References 251</p> <p><b>22 Implementing Context Awareness in Autonomous Vehicles 257<br /> </b><i>Federico Faruffini, Alessandro Correa-Victorino, and Marie-Hélène Abel</i></p> <p>22.1 Introduction 257</p> <p>22.2 Background 258</p> <p>22.2.1 Ontologies 258</p> <p>22.2.2 Autonomous Driving 258</p> <p>22.2.3 Basic Definitions 259</p> <p>22.3 Related Works 260</p> <p>22.4 Implementation and Tests 261</p> <p>22.4.1 Implementing the Context of Navigation 261</p> <p>22.4.2 Control Loop Rule 262</p> <p>22.4.3 Simulations 263</p> <p>22.5 Conclusions 264</p> <p>22.6 Future Research Challenges 264</p> <p>References 264</p> <p><b>23 The Augmented Workforce: A Systematic Review of Operator Assistance Systems 267<br /> </b><i>Elisa Roth, Mirco Moencks, and Thomas Bohné</i></p> <p>23.1 Introduction 267</p> <p>23.2 Background 268</p> <p>23.2.1 Definitions 268</p> <p>23.3 State of the Art 269</p> <p>23.3.1 Empirical Considerations 270</p> <p>23.3.1.1 Application Areas 270</p> <p>23.3.2 Assistance Capabilities 270</p> <p>23.3.2.1 Task Guidance 271</p> <p>23.3.2.2 Knowledge Management 271</p> <p>23.3.2.3 Monitoring 273</p> <p>23.3.2.4 Communication 273</p> <p>23.3.2.5 Decision-Making 273</p> <p>23.3.3 Meta-capabilities 274</p> <p>23.3.3.1 Configuration Flexibility 274</p> <p>23.3.3.2 Interoperability 274</p> <p>23.3.3.3 Content Authoring 274</p> <p>23.3.3.4 Initiation 274</p> <p>23.3.3.5 Hardware 275</p> <p>23.3.3.6 User Interfaces 275</p> <p>23.4 Future Research Directions 275</p> <p>23.4.1 Empirical Evidence 275</p> <p>23.4.2 Collaborative Research 277</p> <p>23.4.3 Systemic Approaches 277</p> <p>23.4.4 Technology-Mediated Learning 277</p> <p>23.5 Conclusion 277</p> <p>References 278</p> <p><b>24 Cognitive Performance Modeling 281<br /> </b><i>Maryam Zahabi and Junho Park</i></p> <p>24.1 Introduction 281</p> <p>24.2 Background 281</p> <p>24.3 State-of-the-Art 282</p> <p>24.4 Current Research Issues 286</p> <p>24.5 Future Research Directions Dealing with the Current 
Issues 286</p> <p>References 287</p> <p><b>25 Advanced Driver Assistance Systems: Transparency and Driver Performance Effects 291<br /> </b><i>Yulin Deng and David B. Kaber</i></p> <p>25.1 Introduction 291</p> <p>25.2 Background 292</p> <p>25.2.1 Context 292</p> <p>25.2.2 Basic Definition 292</p> <p>25.3 Related Work 293</p> <p>25.4 Method 294</p> <p>25.4.1 Apparatus 295</p> <p>25.4.2 Participants 296</p> <p>25.4.3 Experiment Design 296</p> <p>25.4.4 Tasks 297</p> <p>25.4.5 Dependent Variables 297</p> <p>25.4.5.1 Hazard Negotiation Performance 297</p> <p>25.4.5.2 Vehicle Control Performance 298</p> <p>25.4.6 Procedure 298</p> <p>25.5 Results 299</p> <p>25.5.1 Hazard Reaction Performance 299</p> <p>25.5.2 Posthazard Manual Driving Performance 299</p> <p>25.5.3 Posttesting Usability Questionnaire 301</p> <p>25.6 Discussion 302</p> <p>25.7 Conclusion 303</p> <p>25.8 Future Research 304</p> <p>References 304</p> <p><b>26 RGB-D Based Human Action Recognition: From Handcrafted to Deep Learning 307<br /> </b><i>Bangli Liu and Honghai Liu</i></p> <p>26.1 Introduction 307</p> <p>26.2 RGB-D Sensors and 3D Data 307</p> <p>26.3 Human Action Recognition via Handcrafted Methods 308</p> <p>26.3.1 Skeleton-Based Methods 308</p> <p>26.3.2 Depth-Based Methods 309</p> <p>26.3.3 Hybrid Feature-Based Methods 309</p> <p>26.4 Human Action Recognition via Deep Learning Methods 310</p> <p>26.4.1 CNN-Based Methods 310</p> <p>26.4.2 RNN-Based Methods 311</p> <p>26.4.3 GCN-Based Methods 313</p> <p>26.5 Discussion 314</p> <p>26.6 RGB-D Datasets 314</p> <p>26.7 Conclusion and Future Directions 315</p> <p>References 316</p> <p><b>27 Hybrid Intelligence: Augmenting Employees’ Decision-Making with AI-Based Applications 321<br /> </b><i>Ina Heine, Thomas Hellebrandt, Louis Huebser, and Marcos Padrón</i></p> <p>27.1 Introduction 321</p> <p>27.2 Background 321</p> <p>27.2.1 Context 321</p> <p>27.2.2 Basic Definitions 322</p> <p>27.3 Related Work 323</p> <p>27.4 Technical Part of the Chapter 324</p> 
<p>27.4.1 Description of the Use Case 324</p> <p>27.4.1.1 Business Model 324</p> <p>27.4.1.2 Process 324</p> <p>27.4.1.3 Use Case Objectives 325</p> <p>27.4.2 Description of the Envisioned Solution 325</p> <p>27.4.3 Development Approach of AI Application 326</p> <p>27.4.3.1 Development Process 326</p> <p>27.4.3.2 Process Analysis and Time Study 326</p> <p>27.4.3.3 Development and Deployment Data 327</p> <p>27.4.3.4 System Testing and Deployment 327</p> <p>27.4.3.5 Development Infrastructure and Development Cost Monitoring 327</p> <p>27.5 Conclusions 330</p> <p>27.6 Future Research Challenges 330</p> <p>References 330</p> <p><b>28 Human Factors in Driving 333<br /> </b><i>Birsen Donmez, Dengbo He, and Holland M. Vasquez</i></p> <p>28.1 Introduction 333</p> <p>28.2 Research Methodologies 334</p> <p>28.3 In-Vehicle Electronic Devices 335</p> <p>28.3.1 Distraction 335</p> <p>28.3.2 Interaction Modality 336</p> <p>28.3.2.1 Visual and Manual Modalities 336</p> <p>28.3.2.2 Auditory and Vocal Modalities 337</p> <p>28.3.2.3 Haptic Modality 338</p> <p>28.3.3 Wearable Devices 338</p> <p>28.4 Vehicle Automation 339</p> <p>28.4.1 Driver Support Features 339</p> <p>28.4.2 Automated Driving Features 341</p> <p>28.5 Driver Monitoring Systems 342</p> <p>28.6 Conclusion 343</p> <p>References 343</p> <p><b>29 Wearable Computing Systems: State-of-the-Art and Research Challenges 349<br /> </b><i>Giancarlo Fortino and Raffaele Gravina</i></p> <p>29.1 Introduction 349</p> <p>29.2 Wearable Devices 350</p> <p>29.2.1 A History of Wearables 350</p> <p>29.2.2 Sensor Types 351</p> <p>29.2.2.1 Physiological Sensors 352</p> <p>29.2.2.2 Inertial Sensors 352</p> <p>29.2.2.3 Visual Sensors 352</p> <p>29.2.2.4 Audio Sensors 355</p> <p>29.2.2.5 Other Sensors 355</p> <p>29.3 Body Sensor Networks-based Wearable Computing Systems 355</p> <p>29.3.1 Body Sensor Networks 355</p> <p>29.3.2 The SPINE Body-of-Knowledge 357</p> <p>29.3.2.1 The SPINE Framework 357</p> <p>29.3.2.2 The BodyCloud Framework 359</p> 
<p>29.4 Applications of Wearable Devices and BSNs 360</p> <p>29.4.1 Healthcare 360</p> <p>29.4.1.1 Cardiovascular Disease 362</p> <p>29.4.1.2 Parkinson’s Disease 362</p> <p>29.4.1.3 Respiratory Disease 362</p> <p>29.4.1.4 Diabetes 363</p> <p>29.4.1.5 Rehabilitation 363</p> <p>29.4.2 Fitness 363</p> <p>29.4.2.1 Diet Monitoring 363</p> <p>29.4.2.2 Activity/Fitness Tracker 363</p> <p>29.4.3 Sports 364</p> <p>29.4.4 Entertainment 364</p> <p>29.5 Challenges and Prospects 364</p> <p>29.5.1 Materials and Wearability 364</p> <p>29.5.2 Power Supply 365</p> <p>29.5.3 Security and Privacy 365</p> <p>29.5.4 Communication 365</p> <p>29.5.5 Embedded Computing, Development Methodologies, and Edge AI 365</p> <p>29.6 Conclusions 365</p> <p>Acknowledgment 366</p> <p>References 366</p> <p><b>30 Multisensor Wearable Device for Monitoring Vital Signs and Physical Activity 373<br /> </b><i>Joshua Di Tocco, Luigi Raiano, Daniela lo Presti, Carlo Massaroni, Domenico Formica, and Emiliano Schena</i></p> <p>30.1 Introduction 373</p> <p>30.2 Background 373</p> <p>30.2.1 Context 373</p> <p>30.2.2 Basic Definitions 374</p> <p>30.3 Related Work 375</p> <p>30.4 Case Study: Multisensor Wearable Device for Monitoring RR and Physical Activity 376</p> <p>30.4.1 Wearable Device Description 376</p> <p>30.4.1.1 Module for the Estimation of RR 377</p> <p>30.4.1.2 Module for the Estimation of Physical Activity 377</p> <p>30.4.2 Experimental Setup and Protocol 378</p> <p>30.4.2.1 Experimental Setup 378</p> <p>30.4.2.2 Experimental Protocol 378</p> <p>30.4.3 Data Analysis 378</p> <p>30.4.4 Results 378</p> <p>30.5 Conclusions 379</p> <p>30.6 Future Research Challenges 380</p> <p>References 380</p> <p><b>31 Integration of Machine Learning with Wearable Technologies 383<br /> </b><i>Darius Nahavandi, Roohallah Alizadehsani, and Abbas Khosravi</i></p> <p>31.1 Introduction 383</p> <p>31.2 Background 384</p> <p>31.2.1 History of Wearables 384</p> <p>31.2.2 Supervised Learning 384</p> <p>31.2.3 Unsupervised 
Learning 386</p> <p>31.2.4 Deep Learning 386</p> <p>31.2.5 Deep Deterministic Policy Gradient 387</p> <p>31.2.6 Cloud Computing 388</p> <p>31.2.7 Edge Computing 388</p> <p>31.3 State of the Art 389</p> <p>31.4 Future Research Challenges 392</p> <p>References 393</p> <p><b>32 Gesture-Based Computing 397<br /> </b><i>Gennaro Costagliola, Mattia De Rosa, and Vittorio Fuccella</i></p> <p>32.1 Introduction 397</p> <p>32.2 Background 398</p> <p>32.2.1 History of the Development of Gesture-Based Computing 398</p> <p>32.2.2 Basic Definitions 399</p> <p>32.3 State of the Art 399</p> <p>32.4 Future Research Challenges 402</p> <p>32.4.1 Current Research Issues 402</p> <p>32.4.2 Future Research Directions Dealing with the Current Issues 403</p> <p>Acknowledgment 403</p> <p>References 403</p> <p><b>33 EEG-based Affective Computing 409<br /> </b><i>Xueliang Quan and Dongrui Wu</i></p> <p>33.1 Introduction 409</p> <p>33.2 Background 409</p> <p>33.2.1 Brief History 409</p> <p>33.2.2 Emotion Theory 410</p> <p>33.2.3 Emotion Representation 410</p> <p>33.2.4 EEG 410</p> <p>33.2.5 EEG-Based Emotion Recognition 411</p> <p>33.3 State-of-the-Art 411</p> <p>33.3.1 Public Datasets 411</p> <p>33.3.2 EEG Feature Extraction 411</p> <p>33.3.3 Feature Fusion 412</p> <p>33.3.4 Affective Computing Algorithms 413</p> <p>33.3.4.1 Transfer Learning 413</p> <p>33.3.4.2 Active Learning 413</p> <p>33.3.4.3 Deep Learning 413</p> <p>33.4 Challenges and Future Directions 414</p> <p>Acknowledgment 415</p> <p>References 415</p> <p><b>34 Security of Human Machine Systems 419<br /> </b><i>Francesco Flammini, Emanuele Bellini, Maria Stella de Biase, and Stefano Marrone</i></p> <p>34.1 Introduction 419</p> <p>34.2 Background 420</p> <p>34.2.1 An Historical Retrospective 420</p> <p>34.2.2 Foundations of Security Theory 421</p> <p>34.2.3 A Reference Model 421</p> <p>34.3 State of the Art 422</p> <p>34.3.1 Survey Methodology 422</p> <p>34.3.2 Research Trends 425</p> <p>34.4 Conclusions and Future Research 426</p> 
<p>References 428</p> <p><b>35 Integrating Innovation: The Role of Standards in Promoting Responsible Development of Human–Machine Systems 431<br /> </b><i>Zach McKinney, Martijn de Neeling, Luigi Bianchi, and Ricardo Chavarriaga</i></p> <p>35.1 Introduction to Standards in Human–Machine Systems 431</p> <p>35.1.1 What Are Standards? 431</p> <p>35.1.2 Standards in Context: Technology Governance, Best Practice, and Soft Law 432</p> <p>35.1.3 The Need for Standards in HMS 433</p> <p>35.1.4 Benefits of Standards 433</p> <p>35.1.5 What Makes an Effective Standard? 434</p> <p>35.2 The HMS Standards Landscape 435</p> <p>35.2.1 Standards in Neuroscience and Neurotechnology for Brain–Machine Interfaces 435</p> <p>35.2.2 IEEE P2731 – Unified Terminology for BCI 435</p> <p>35.2.2.1 The BCI Glossary 439</p> <p>35.2.2.2 The BCI Functional Model 439</p> <p>35.2.2.3 BCI Data Storage 439</p> <p>35.2.3 IEEE P2794 – Reporting Standard for in vivo Neural Interface Research (RSNIR) 441</p> <p>35.3 Standards Development Process 443</p> <p>35.3.1 Who Can Participate in Standards Development? 443</p> <p>35.3.2 Why Should I Participate in Standards Development? 444</p> <p>35.3.3 How Can I get Involved in Standards Development? 
444</p> <p>35.4 Strategic Considerations and Discussion 444</p> <p>35.4.1 Challenges to Development and Barriers to Adoption of Standards 444</p> <p>35.4.2 Strategies to Promote Standards Development and Adoption 445</p> <p>35.4.3 Final Perspective: On Innovation 445</p> <p>Acknowledgements 446</p> <p>References 446</p> <p><b>36 Situation Awareness in Human-Machine Systems 451<br /> </b><i>Giuseppe D’Aniello and Matteo Gaeta</i></p> <p>36.1 Introduction 451</p> <p>36.2 Background 452</p> <p>36.3 State-of-the-Art 453</p> <p>36.3.1 Situation Identification Techniques in HMS 454</p> <p>36.3.2 Situation Evolution in HMS 455</p> <p>36.3.3 Situation-Aware Human-Machine Systems 455</p> <p>36.4 Discussion and Research Challenges 456</p> <p>36.5 Conclusion 458</p> <p>References 458</p> <p><b>37 Modeling, Analyzing, and Fostering the Adoption of New Technologies: The Case of Electric Vehicles 463<br /> </b><i>Valentina Breschi, Chiara Ravazzi, Silvia Strada, Fabrizio Dabbene, and Mara Tanelli</i></p> <p>37.1 Introduction 463</p> <p>37.2 Background 464</p> <p>37.2.1 An Agent-based Model for EV Transition 464</p> <p>37.2.2 Calibration Based on Real Mobility Patterns 466</p> <p>37.3 Fostering the EV Transition via Control over Networks 468</p> <p>37.3.1 Related Work: A Perspective Analysis 468</p> <p>37.3.2 A New Model for EV Transition with Incentive Policies 469</p> <p>37.3.2.1 Modeling Time-varying Thresholds 469</p> <p>37.3.2.2 Calibration of the Model 470</p> <p>37.4 Boosting EV Adoption with Feedback 470</p> <p>37.4.1 Formulation of the Optimal Control Problem 470</p> <p>37.4.2 Derivation of the Optimal Policies 471</p> <p>37.4.3 A Receding Horizon Strategy to Boost EV Adoption 472</p> <p>37.5 Experimental Results 473</p> <p>37.6 Conclusions 476</p> <p>37.7 Future Research Challenges 477</p> <p>Acknowledgments 477</p> <p>References 477</p> <p>Index 479</p>
<p><b>Giancarlo Fortino, PhD,</b> is a Full Professor of Computer Engineering, Chair of the ICT PhD School, and Rector’s Delegate for International Relations with the Department of Informatics, Modeling, Electronics, and Systems at the University of Calabria, Italy. <p><b>David Kaber, PhD,</b> is the Department Chair and Dean’s Leadership Professor with the Department of Industrial & Systems Engineering at the University of Florida. <p><b>Andreas Nürnberger, PhD,</b> is a Full Professor of Data and Knowledge Engineering in the Faculty of Computer Science at Otto-von-Guericke-Universität Magdeburg, Germany. <p><b>David Mendonça, PhD,</b> is a Senior Principal Decision Scientist at Advanced Software Innovation.

You might also be interested in these products:

Circuitos lógicos digitales 4ed
by: Javier Vázquez del Real
EPUB ebook
28,99 €

Open RAN Explained
by: Jyrki T. J. Penttinen, Michele Zarri, Dongwook Kim
PDF ebook
102,99 €

Open RAN Explained
by: Jyrki T. J. Penttinen, Michele Zarri, Dongwook Kim
EPUB ebook
102,99 €