SmartCities CaseExample Bannan/Transcript


Transcript of the presentation by Dr. Brenda Bannan

A Smart City Case Example

“So first let me introduce Dr. Brenda Bannan. She is an assistant professor in the division of learning technologies in the College of Education and Human Development at George Mason University in Fairfax, Virginia. Dr. Bannan's research interests involve the articulation of methods and processes in educational design research in the learning technologies field. Her work investigates the intersection of learning theory, emerging technologies, user experience, design processes, design research, and the iterative development of innovative learning technologies, systems, and solutions. She's implemented one of the first doctoral programs in learning technology design research at George Mason University and she received her Ph.D. from Penn State. So Brenda.” Dr. Keith Marzullo

Thank you very much. Thank you very much for the invitation to speak to you today. I would like to talk about an intellectual experiment that a team of 40 professionals and volunteers from this region participated in, thanks to the NIST Global City Teams Challenge. And I'd like to speak to it from a research perspective, but also from a design perspective, in how we iterated toward a research agenda that we are now going to follow from this initial experience.

I might also point out that this team of 40 professionals and volunteers had never designed in the Internet of Things and cyber-physical systems space. So we perhaps represent some of the audience that you may be trying to reach. And in the conclusion of the presentation, I'd like to give you some of the insights that we arrived at in our process, with directions going forward.

So this morning, [INAUDIBLE], who is responsible for allowing me to speak to you and providing the invitation to your subcommittee, reminded me that when we did our Smart City Challenge team, we wanted to bring Fairfax Fire and Rescue and an actual ambulance into the National Building Museum where the event was held. And [INAUDIBLE] and I worked for weeks to try to coordinate bringing in an ambulance from Fairfax Fire and Rescue.

We had the ambulance, we had the simulated patient, we had the staff, except the door was not big enough to allow the ambulance through. That's one thing that we overlooked. So trying to problem-solve that is, I think, an example, in a way, of design constraints that help to dictate what you are able to do in what I'm going to call the Smart Cities learning design research experience, which is what this experience represents to me. So I hope there are some useful insights as we go forward.

As I stated, we were a team of organizations, including Inova Fairfax Hospital and Fairfax Fire and Rescue, as well as startup companies, academics from all over the region, and community volunteers. And I want to tell you a little bit about our journey. As I stated, I want to focus not just on the technology system, and not just on the research that is possible from this experience, but on the learning and design process that occurred, which actually helped to shape that research agenda.

I believe that this is very important for Smart City initiatives. Getting together very, very smart, intelligent people who care about a community is crucial. It's about the people as well as the technology. So as I surveyed and attended the Smart America Challenge as well as the NIST Global City Teams Challenge, it struck me that there were few cases of learning- and performance-related research. There are many examples in other areas, but there were few examples in that area.

That's the area I work in, learning technologies design, and several years ago I put together a framework. As I thought about this challenge and unpacked what we did, this framework, which I call the Integrative Learning Design model-- it's kind of an R&D process with some learning theory embedded in it-- is perhaps applicable to this initiative as well. The human-centered design process, the experience design process that starts with humans and not technology, is a core focus of my work. Community involvement through a participatory design experience is also a core focus.

So it is my vision to expand the educational design process tool set, so that you might include others from the social sciences, in addition to computer science and advanced networking professionals, in this movement. And I'm an example of that. And to support new partnerships and solutions. So in thinking about this talk, I read a book called Beyond Smart Cities by Tim Campbell. And his work really resonated with me in that he spoke to the idea that, to go beyond smart cities, we need to think about the learning processes that are involved. Not just the learning processes that come from the research and prototyping, but also the learning processes of the cities, and communities, and citizens involved in this work. So therefore, as we think about Smart Cities, perhaps the process of learning is just as important as the product.

He also speaks to this-- to really achieve Smart Cities, we need to go under the hood. Really, what happened there? It was such a complex experience designing at this level that we really need to understand what the mechanisms were for city learning, and community learning, and citizen learning in this context. We believe that the design process may hold the key for this, that this is an important process. It's just as important as the software and hardware that we are implementing. It really is design that helps us to use technology. It really is an attunement to the design process that helps to make innovative products happen and be usable.

The Europeans are somewhat ahead of us in this thinking. They are already basically forming conferences and groups related to Smart City learning. In fact, I've been invited to give a similar talk next month at the European Conference on Technology Enhanced Learning in Toledo, Spain. And there is a group there that is focused on Smart City learning from a very humanist perspective. And one of the researchers in that space spoke of a grand challenge as the integration of a top-down vision, a policy vision, with a bottom-up vision, for what he called a person-centered, in-place design approach. And I believe that we tried to strive toward this person-centered approach in the example I'm going to show you.

This was our team challenge name. We called it the Ecosystem for Smart Medical Team Training. And we tried to think about context. Context was prevalent in all of our work, in all of our thinking. And Tim Campbell speaks to this. He basically says that understanding the contextual factors can make or break a good idea. And we tried to implement a systematic process to help us understand those contextual factors that would make or break our Smart City, Internet of Things design.

This is a big picture and I'm going to unpack this for you. But we tried to use Internet of Things technologies in the space of medical team training. That, in and of itself, is an extremely complex space to intervene in. And I will be unpacking this, but we wanted to enhance not only team-based training, but eventually impact patient care. That is the ultimate goal. To do this-- and I'm not going to go through this-- this is the Integrative Learning Design framework I put together originally in 2003 for web-based learning and e-learning initiatives.

But as I look at it, the broad phases of this systematic process are: unpacking context in multiple ways-- pragmatically and from research-- through what I call an informed exploration phase; bringing all of that together to funnel into the direction or learning target you are iterating and prototyping toward in an enactment phase; then evaluating your innovation or intervention in a local environment; and eventually scaling out. There are some guiding questions here and some applicable research methods ideas.

And we began to think about this experience through this systematic lens of the broad phases, which I'm going to use as an organizational framework going forward. So as I stated, our informed exploration was the initial phase. We really needed to understand this context. Please understand that this research area was brand new to all of us, the technology system was brand new to all of us, we had no funding, we had no corporate sponsorship in this initiative. This was all in-kind contributions and we spent our time, my graduate students and I, really diving into the context to try to understand what the boots on the ground view was of the problems that were inherent in that medical and emergency response simulation.

So we began to frame a problem. And we had help from The Advanced Surgical Education and Technology Center, which is a brand new surgical simulation lab at Inova Fairfax Hospital that opened last summer. They brought us in, as learning technologists, to think about how they might use state-of-the-art training. And we were brought in to observe some of their simulation context. And it was an extremely eye-opening experience to do that. And so out of our interaction with them, we began to look into the literature and really try to understand what we were dealing with here.

And we all know that medical, and surgical, and emergency response teams have an amazing amount of information to deal with. They need to be cognizant of time, they need to be aware of elements that are around them at all times, they need to coordinate as a team, they need to think about their actions, others' actions, and stress. And any and all of these can impact their performance.

So as we looked at simulation, and the literature, and the pragmatic observations that we did, certainly simulation is a viable intervention. It has been linked to improved patient care and improved training. But the dynamic interaction, which we saw, was really key to understanding team-based awareness and individual situation awareness.

We also saw in the literature that the behavioral as well as cognitive factors in team-based settings are notoriously hard to measure. They are typically measured with multiple observers, and they are very time intensive and resource intensive. So we decided to complicate our lives even further. And we decided to include not just one team, but several teams. We wanted to try to instrument across the continuum of care: from an accident in the field and the emergency response team responding, to the ambulance team with the patient en route to the hospital, to the emergency department.

And originally, we wrote the scenario to go all the way into the surgical setting. And this was a realistic, real-time simulation. Multi-team. And this is what we envisioned: deploying a real-time tracking system across this continuum of care that would record their activity and experience in this high-fidelity, multi-team simulation. Perhaps it was my naivety that got us into this complexity, but we were able to carry off a technology proof of concept that will actually greatly inform our next steps in research.

And we wanted to gather input. We didn't know if this would work. We had no idea. The technology system had not been tried outside of traditional learning management systems and digital input. And these teams had never worked together, had never run an actual multi-team simulation before. So we were in new territory. Luckily, we had an amazing community. We had the support of Fairfax Fire Chief Richard Bowers, and we had two of his medical directors. The Chief of Surgery, Dr. John Moynihan, was our initial champion, and this would not have happened without their support. He also directed us to his Emergency Department Director, Dr. Maggie Griffin, who wrote the actual multi-team simulation scenario.

And we had probably 10 MDs. We had medical residents, we had fellows, we had OR techs. We had their simulation experts; they have a state-of-the-art video simulation system in that context. And we had a lovely government agency initiative that had developed the software that allowed this to happen. And we had analytics companies, and hardware and software companies, that donated their time, and product, and expertise.

But most importantly, we also had volunteers from the community, who were unsolicited, who participated in this experience. Not only academics, but instructional designers, corporate professionals, high school students, graduate students, and undergraduate students, which really made this unique, I believe. So we took what we call a deep dive in our informed exploration. I teach my students a user experience design process and I say the most important thing is to get into the context, get next to that person, and really use empathy to understand what they are experiencing. That makes you a better designer. That makes you a better design researcher, I believe, as well.

So we dove into the context. We observed more team-based surgical simulations. We observed simulations at The Fire and Rescue Academy and The Fire and Rescue Department. We spoke to ambulance crews; we rode on the ambulance. We tried to unpack their experience across these teams. And part of our process was to try to identify what are called pain points in the journey. In talking to them, what might we understand about places where we could intervene, where it made sense to intervene, with an intervention that we then could conduct research and iterative cycles on?

So it really makes a big difference in the experience to think about contextual design. I believe that Smart Cities design adds a whole other layer that learning technology design does not have. And that's the intersection between people, information, and technology, as well as spaces, or settings, and contexts. And the many-to-many interactions that can happen among these were part of our analysis to try to break down where we could intervene in the most logical way.

So we conducted multiple interviews, observations, and focus groups. We didn't just do this at the beginning; we kept doing it. So we did it in iterative fashion as we formed the idea, as we shaped our direction. And we identified pain points. One, which I spoke about before, was the complexity of a live-action, multi-team simulation. What is important in there? One of the things we heard directly from them was the patient hand-off between the EMS team and the Emergency Department, as well as from the Emergency Department to the surgical theater. It is a place where mistakes occur. And it's a place the literature in that realm really focuses on.

Another pain point is inter-professional, team-based work. These teams are dynamic. They change. Members change. And they have to work under extremely demanding conditions of time, and pressure, and stress. And the team is typically never the same, so there's a dynamic element to that. And what we noticed in observing the debriefing sessions of the simulations was that the information spoken about in the debrief was at a very cursory level. And the video data collection was never really referred to, because of the time limitations of the busy professionals who conduct these simulations at 6:30 in the morning before their actual shift.

And so they don't always have enough time to repeat the simulation, which best practices dictate. And they don't have enough time to unpack everything. So you're relying on the facilitator's memory, and the memory of those in the room, to pull out the learning points. So we decided to go to the next stage, with all of this information funneled into what I call the enactment phase. It takes your humble theory of what you think might work-- with some support from the literature, yet never tried before-- and asks how we can design something, enact something, to intervene here.

We articulated our target as tracking individual and team-based behavior. Now, a design constraint that we had was that we only had one piece of technology in our Internet of Things: Bluetooth proximity beacons that were donated to us. Originally, we had many, many ideas, but the hardware did not manifest. But this hardware did, from Radius Networks in Georgetown. So we decided to hone our concept around these proximity beacons, which I'll speak about in a moment, as our data collection method.

We wanted to display real-time information about the simulation, particularly in the debrief, where research says the most learning occurs. As I looked into the literature-- and there was an iterative process between the pragmatic, practical context and the literature we were going through-- I found a study in Switzerland in which firefighters actually recorded proximity data as they entered a simulated fire context. And the wearable computing was able to give them information about their dynamics and their team coordination, and a graphical representation, across time, of who was in close proximity to whom allowed the instructors to unpack what that meant.

So this is kind of a model of what we were trying to do. We were working toward, eventually, in other iterations-- but not in this technology proof of concept-- impacting situation awareness and collaborative reflection. This was not evaluative. We made it very clear that they were participating in this for their own learning and for their own reflection.

This is a key thing in user experience design: we were building to think. And it's an important point here that we held a "hack-a-thon." We ended up forming, framing, and re-framing our new direction. And we held a community-based hack-a-thon where we recruited volunteers from local meetup groups who were involved in the Internet of Things, who were involved in design thinking, who were involved in networking. And we got all sectors of the community. Here you see a Harvard-trained analytics CEO along with a high school student and a learning technology professional from The Red Cross, in the black t-shirt. And in the striped shirt is a representative from the government agency who was helping us understand the software protocol and implement it.

That's just a slice of the levels of community that participated in this. And they came to do something bigger than themselves, to help their community, and to use, learn, and be exposed to different ways of designing. So I spent some time at Stanford University, in the d.school. And David Kelley, who is from IDEO, influenced my thinking greatly. And he says it very clearly here that design, for whatever reason, is nonthreatening. It really levels the playing field. Everyone can speak to their experience. And our community members spoke to their experience of hospital wait times, of friends who were in an ambulance going to the wrong hospital, of things that went wrong in their experience. And we leveraged that to think about our own design and to help us in research.

So we began to hone in, focus, and iterate. And the key finding or element in the design research process for me was a serendipitous comment by the trauma surgeon. She said, look, I know we only have these proximity beacons and it limits what we can do, but if you can just tell me who's in the room, meaning the trauma bay, versus who's actually supposed to be in the room, and the location of the patient at all times, that would be good enough for me right now. So that's what we strove to do. And it actually manifested from there.

So I'm going to give you a taste of the experience through a video. It's eight minutes and I'm going to cut it off short. It gives you a taste of that day, beginning at 5:30 AM with the actual multi-team simulation that we conducted, and a little bit of the feedback from the participants that helped inform our design and will iteratively help inform our design research going forward.

[VIDEO PLAYBACK]

-A professor in the division of learning technologies in the College of Education and Human Development at George Mason University. This is Dr. Shane Gallagher from The Advanced Distributed Learning Co-Lab. And we are about to embark on an interesting technology proof of concept adventure.

-Engine 429, medic 429. Channel 46, MVA [INAUDIBLE] for an accident with injury. 2600 West Ox Road. Engine 429, medic 429. Channel 46. MVA for an accident with injury. 4600 West Ox Road. Single vehicle accident with one patient not acting appropriately.

[AMBULANCE SIREN]

-429 engine.

-Engine 429.

-Engine 429.

-Medic 429 is en route.

-Medic 429.

-Hello. Sir? Sir, are you OK? Are you OK, sir? We'll need a backboard, Tyler, stretcher. All right? Come and take C-spine real quick.

-So we have a system that is actually being put in place today, a multi team simulation, thanks to Dr. Weir, and The Fairfax Fire and Rescue Academy, and The Fire and Rescue Department, as well as The Fairfax Hospital. And we are about to test a technology concept of trying to improve simulation training for The Fire and Rescue folks as well as The Fairfax Hospital surgical and medical teams.

So we have an interesting system that involves a beacon from one of our sponsors, Radius Networks. It also involves cellphone technology, where the beacon emits a low-level radio frequency signal every three seconds that is picked up by the cellphone listener. And the cellphone then transmits information via the xAPI, or Experience API, protocol to the cloud, which is represented here by a little microprocessor from another one of our sponsors, Arnouse Digital Devices. It's a BioDigital PC. This is an actual x86 PC, a credit-card-sized PC, that fits into a reader.

And from this, the signal goes to a specialized database that was created by Yet Analytics in Baltimore, Maryland. And that database feeds, basically, a picture back to the participants, in graphical form, to allow them to see their fine-grained actions. In this case, their proximity to the simulated patient. So we are in the process of instrumenting the patient. And Shane, would you like to elaborate?

-Yes. So what we will do is we're going to use some double-sided tape and we're going to put this beacon-- let's check and see. There's the green. So it is green and it's on at this point. And we're going to stick that somewhere inside here, so that it's secure but will travel with this mannequin. And I think we'll try that right there and see if that works.

Now this phone will, through Bluetooth, pick up beacon signals. We have an app that's been developed that then, as Brenda said, will transmit those signals, via the internet, to a cloud-based learning record store, which collects this xAPI data. And it's then analyzed. And we're also then accessing that at the hospital through a leader board so they can see what's going on: where the mannequin is in space, and where the EMTs are in proximity to it.

-So throughout this multi-team simulation, we will be able to track the patient through its journey from the crash site, en route in the ambulance, to the hospital. As well as, hopefully, if everything goes well, track the professionals' proximity to the patient and some of their movement.

-Yes. And there will also be cell phones at strategic locations within the site where the patient was picked up, and in the ambulance, and also at two locations within the ER and the trauma bay. And this cellphone here will pick up proximity from the medical technicians. However, this beacon actually will transmit towards those other cellphones, so that we can track both types of activities. And that's essentially what's going to happen.

-Good morning. 45-year-old male, [INAUDIBLE]. Driver, restrained. Altered LOC initially and hypotensive. Initial blood pressure was 90 over 61. Pulse rate is 110. He's got two lesions from [INAUDIBLE]. His last blood pressure was 118 over 68. Pulse rate of 118. No medical history, no meds, no allergies. Chief complaint: neck pain and pain in his upper right quadrant.

-On your count.

-Ready?

-One, two, three.

-This data is actually collected in real time to then be shown back to the teams in the debriefing session, which is one of the most important points of learning in this process. That debriefing session is crucial, so that they are then able to see their fine-grained actions in the situation as it happened. They'll be able to see it immediately. We've never been able to do that before. We don't have good ways of measuring or evaluating team-based behavior, so this is an experiment to see, and to give the EMTs as well as the medical staff at Inova Fairfax a different type of window into their own behavior, with the hope that it could enhance their situation awareness, their team coordination, and their functioning as a team in high-stakes situations.

-I know the residents have to go. I'd like to get their feedback before they need to go, just to get a sense-- do you have a sense of what this might do? And your reaction to it. And to my colleague Jane over here: if you could imagine anything in your mind that we could track to help your team process be better, what would it be?

-I think it's great. Like Daryl said, we've all worked together in the trauma bay many times before, but it doesn't go as smoothly as that sometimes. So I think this would really help, because maybe we'll get more people in the bay. Like someone is dedicated to grabbing meds for us, if we need intubation meds, but they're trying to put in an IV or something. So I think this will really help with team dynamics. And as far as tracking within a foot of the patient in the bay, I don't know if that really will help so much, because we're all going to be within a foot of touching the patient.

-But if we were able to track your proximity or your actions with particular objects in that arena, in that theatre, that would be a useful thing for you?

-Definitely.

-And that's what this adds. This adds a whole layer that we were unable to do before, in real time, on the technology side. Other thoughts from the residents or Dr. [INAUDIBLE]?

-Well, the things we talked about were being able to put things where we could identify when the ultrasound machine probe gets picked up. The manual cuff-- when does somebody pick that up and go with it? So we all think we're doing these things in a timely manner and that kind of thing, but we really don't know. When does the blood bay door open from when the patient arrived? When did you pick up the i-STAT machine to do the ABG? All those kinds of things that we just kind of do.

And we really think we're doing them-- of course we're doing them efficiently and appropriately every time-- but if you can really track those things and know, then you're able to debrief on it and say, why did it take us 25 minutes to get blood to this patient? Or whatever it is. Those kinds of things are things that, I think, we really don't know.

[END PLAYBACK]

I'm going to stop it there. That gives you a flavor of the day, and shows that we were able to really capture and understand what their perspectives were in the context immediately afterwards. They actually went through their regular debriefing process first, and then we were able to talk through suggestions that they might have for our next iteration of this. But let me give you a sense of the system itself.

The system was built with the Experience API, or xAPI, which is a specification created by The Advanced Distributed Learning Co-Lab, which is under The Department of Defense. It's been used in military as well as some corporate contexts. And Dr. Shane Gallagher, who is here with me, was crucial to instituting this specification in our work here. It allows technology systems and devices to talk to one another and record data in the form of English-like statements that are human readable and machine readable. Something like "nurse placed blood pressure cuff," in an actor, verb, object syntax, that is then aggregated and sent to a cloud-based learning record store, which can then be displayed in real time.
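To make that actor, verb, object syntax concrete, here is a minimal sketch of what one such statement could look like as data. This is an illustrative reconstruction, not the team's actual vocabulary: the names, verb and activity IDs, and timestamp are hypothetical placeholders.

```python
import json

# A minimal, illustrative xAPI-style statement in actor-verb-object
# form, echoing the "nurse placed blood pressure cuff" example above.
# All names, IDs, and the timestamp are hypothetical placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Nurse 1",                                  # who acted
        "mbox": "mailto:nurse1@example.org",
    },
    "verb": {
        "id": "http://example.org/verbs/placed",            # what they did
        "display": {"en-US": "placed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/activities/bp-cuff",      # what it was done to
        "definition": {"name": {"en-US": "blood pressure cuff"}},
    },
    "timestamp": "2015-06-01T06:42:13Z",
}

# Human readable ("Nurse 1 placed blood pressure cuff") and machine
# readable: the same record serializes as JSON for the cloud-based
# learning record store.
print(json.dumps(statement, indent=2))
```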

And this made things very much more accessible for the learning technology folks: to understand what this analytic system could do and to pre-define those elements so that we could collect that data. This is a very complex information architecture representation. We weren't able to do everything that was on this originally, and we cut out some of the context. And we simplified to this representation, which just shows you the mannequin instrumented with a small little beacon as well as a cellphone listener. So the beacon was the sender and the cellphone was the listener.

And the proximity of the professionals, who each had beacons on, was read by the listener inside this SimMan patient. There were also cellphones placed at different points in the system in order to track the other professionals' beacons. So it was a simple idea, but a very interesting one in process, in that the community helped us write these statements. They helped us actually form some of the information architecture as well as the statements that were written, that went to the cloud. So those community members also went away with an enhanced knowledge of what this type of system could potentially do.
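As a rough sketch of that sender/listener idea: a phone hears a beacon's periodic signal and classifies coarse proximity from signal strength. This is a hypothetical illustration under assumed calibration thresholds; the real Bluetooth scanning code and the team's actual values are not shown.

```python
from dataclasses import dataclass

# Hypothetical listener-side logic: the phone inside the SimMan
# mannequin hears a worn beacon's periodic Bluetooth signal and maps
# signal strength to a coarse proximity zone. Thresholds are assumed
# for illustration; real BLE scanning APIs are omitted.

@dataclass
class Detection:
    listener_id: str   # e.g., the phone inside the mannequin
    beacon_id: str     # e.g., a beacon worn by an EMS provider
    rssi_dbm: int      # received signal strength, heard every ~3 seconds

def classify_proximity(rssi_dbm: int) -> str:
    """Map signal strength to a coarse proximity zone (illustrative)."""
    if rssi_dbm > -60:
        return "immediate"   # roughly within arm's reach of the patient
    if rssi_dbm > -80:
        return "near"        # in the room / trauma bay
    return "far"             # elsewhere on the site

d = Detection("simman-phone", "ems-provider-3", -57)
print(d.beacon_id, "is", classify_proximity(d.rssi_dbm), "to the patient")
```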

We evaluated it. Some of our results: we realized that we needed a little bit more contextual information. So at the very last minute, probably a night or two before, one of our designers and developers built a checklist on an iPad, so that someone inside the ambulance could electronically check off the medical events that were taking place and the timing of those. And that was also fed, via xAPI statements, ahead of the patient's arrival to the emergency department, where they could see the status of the patient before that patient arrived. That was an innovation to them, which is interesting.
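A minimal sketch of how such a checklist event might be pushed ahead to the emergency department, assuming a simple HTTP post of an xAPI-style statement to a learning record store; the endpoint, verb, and event names here are placeholders, not the actual system's configuration.

```python
import json
import urllib.request

# Hypothetical en-route checklist feed: each item the ambulance crew
# checks off becomes a timestamped statement pushed to the learning
# record store, so the emergency department can see the patient's
# status before arrival. The endpoint and names are placeholders.

LRS_URL = "https://lrs.example.org/xAPI/statements"  # placeholder

def record_event(crew: str, event: str, timestamp: str) -> None:
    statement = {
        "actor": {"name": crew, "mbox": f"mailto:{crew}@example.org"},
        "verb": {"id": "http://example.org/verbs/performed",
                 "display": {"en-US": "performed"}},
        "object": {"id": f"http://example.org/events/{event}",
                   "definition": {"name": {"en-US": event}}},
        "timestamp": timestamp,
    }
    req = urllib.request.Request(
        LRS_URL,
        data=json.dumps(statement).encode(),
        headers={"Content-Type": "application/json",
                 "X-Experience-API-Version": "1.0.3"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire-and-forget for this sketch

# e.g., logging when fluids were given, one of the key events below
record_event("medic-429", "fluids-given", "2015-06-01T06:05:00Z")
```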

Here are some of those statements. And this was done through a commercial app as well as a Google form, basically. It was as easy as that. Here's some of the temporal data that came from that experience, along with the logistics of the scenario. And, importantly, the patient care events that happened, such as fluids given, which is a real question. In that scenario, it was important to know when the fluids were given. And some of the other core performance indicators, such as the time from the ambulance to the door of the Emergency Department. So that was automatically collected and manually collected.

This is some of the preliminary data representation from the xAPI statements. The orange circles represent the cellphone listeners, and they also have email addresses attached to them. The pink circle is the verb: which proximity detections were made, and the volume of those detections. And the green circles represent individuals who were instrumented with beacons.

What you can't see in this is that it's dynamic. You can actually pull this apart and drill down to an individual's experience. Here is a slice of the data looking at the hand-off. You can see, particularly in the orange-to-green contact, the hand-off between the EMS provider and the trauma bay. We were able to show the trauma surgeon exactly who was closest to the patient during that hand-off process in a very simple way.
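The kind of query behind that view might look like the sketch below, assuming proximity detections are logged with timestamps and signal strengths; the data, names, and time window are illustrative only.

```python
# Hypothetical hand-off analysis: given proximity detections logged by
# the listener inside the mannequin, find who was closest to the
# patient during the hand-off window. Data and names are illustrative.

detections = [
    # (timestamp_s, person, rssi_dbm) -- a stronger signal means closer
    (600, "ems-provider-1", -55),
    (602, "trauma-nurse-2", -70),
    (603, "ems-provider-1", -52),
    (605, "trauma-surgeon", -61),
]

def closest_during(window_start: int, window_end: int) -> str:
    """Return who had the strongest (closest) reading in the window."""
    in_window = [d for d in detections if window_start <= d[0] <= window_end]
    return max(in_window, key=lambda d: d[2])[1]

print("Closest to patient during hand-off:", closest_during(600, 606))
```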

This just overlays all the beacon detections, and the numbers overlay the events that were collected through the electronic checklist; the patient's blood pressure level, which was crucial to this scenario, is laid in here as well. So there was some experimentation with representation here. You heard some of the participatory design feedback that we collected from the trauma surgeon and the medical residents, and their preference to really do this again, but to track objects and movements of objects, which we can do through accelerometers and through many other different types of sensors and digital devices with the xAPI protocol.

And improving their efficiency, effectiveness, and time base. And also improving their reflection: what would be most important for them to see based on these actions? Their suggestion is that we replicate this, and we would like to do that with additional behavioral tracking. The Fire and Rescue team was very concerned about the data collection being invisible. The manual checklist, even though it was electronic and set up ahead of time, was too much overhead for them. They want the data to be collected invisibly and automatically, and so we would strive toward that.

They also mentioned recording. They'd like verbal statements to give context, but not recording everything-- that was very clearly laid out. So perhaps a keyword-triggered recording mechanism, which only gets segments that are important to the context, is an innovation falling out of this work. Biometric data: the stress that happened on these teams is extremely important and impacts performance, so collecting galvanic skin response would potentially be something we would do next. And improving the visualization-- that really is a key element here, to make it easily understandable by the participants, who don't have a lot of time to unpack what went on. And my interest is to improve our system and process approach through design research in this realm.

So there were many, many outcomes that came out of this. Learning by the academics, learning by the community members. And many different levels of innovation, and technology, and thought about what can be done. Multiple research directions in Smart Cities and the Internet of Things, in health care, in team science. And creating a new way to look at teams, potentially. And the interdisciplinary nature of this work cut across silos. For the first time in 20 years at Mason, I'm working with human factors professors; I'm working with a bioengineer who's designing the components for this. It was amazing to me. As well as with community design participants. And so there were interdisciplinary interactions all over, grounded in practice.

And this is crucial to a design research process: being able to iterate between the boots-on-the-ground perspective and research, back and forth, really happened in this context. Being able to frame and re-frame the problem, using design thinking, using broad-level systems thinking. And being able to do a proof of concept means that our team-- who have now worked together, who know each other's strengths, who know the system-- is now able to iterate in a more meaningful way.

And we've shown that xAPI, which had never been used in a situation this complex, can actually collect data automatically in the field. We had some bumps, but we got through it, and we got data. And we can incorporate other digital devices. Imagine if we could collect interoperably, which the spec allows, from medical devices-- the continuous data streams that could happen in that context. And design: I've spoken a lot about this today, and hopefully you see some of it in action through the research here, but I have a new direction in thinking about design at the level of complexity of organizations and teams, and actually diving deep into very complex scenarios.

And the analysis, iterative visualization, and a systems perspective. And learning from failures as well as successes-- we need to allow that to happen. So I'm not going to go through this, but these are some initial thoughts related to my readings on Smart Cities, thinking about that integrative learning design framework and a systematic approach toward Smart City learning design and development.

So in conclusion, Tim Campbell says it well. He says we need to see city learning as a collective learning process, at all levels, which starts with discovery by individuals. I was very fortunate to have individuals on this team from all walks of life and all levels of learning who added to this success. And the focus of Smart Cities should be on more than just the technology system. It should also be on learning, on some type of design research process, on access and inclusion-- involving social scientists, like myself, in high-level, advanced networking design-- and on using the informal and formal learning channels in a top-down as well as a bottom-up process.

The interdisciplinarity inherent in this work is amazing. And some of the opportunities that I have had, and that our team has had, really speak to this. And the level of complexity, the systematic approach, and thinking thoughtfully about learning from case examples like this one-- I hope that is where you will help us travel forward in this journey. Thank you very much for the opportunity to talk to you.