[I/O]³: A Hands-On Approach to 3D Modeling
University of Massachusetts-Amherst
[I/O]³ (Input/Output Cubed) is the next step in 3D modeling technology. Analogous to a drawing tablet, it aims to allow intuitive manipulation of virtual 3D objects on the computer, as well as display a volumetric image of those objects. User hand gestures control virtual object functions, like creation, translation, rotation, and scaling, as well as control functions, like saving a copy of the virtual model or undoing or redoing a change made to the model. The model is then shown on a volumetric display, allowing the user to visually examine it as if it had been CNC machined or 3D printed, before consuming any physical material. The volumetric display is a true 3D representation of the virtual model utilizing persistence-of-vision, free of the characteristic distortions and limitations of common stereoscopic 3D displays. Combined with simple 3D modeling software, the hand-gesture tracking and true-3D volumetric display of [I/O]³ will provide users with a complete toolset to create, modify, and save virtual 3D models intuitively, all while viewing their creation as if it were a tangible object, without the time and material cost of physically building it.
Augmented Reality Simulator
Wearable computing is quickly becoming the next step in embedded systems evolution. A revolution which began with smartphones has progressed to devices like Google Glass and the Samsung Galaxy Gear that aim to bring computing closer to everyday life. But existing wearable implementations target the consumer market and serve merely as omnipresent gateways to the Internet of Things. In response, this team proposes an augmented reality platform which offers new multi-user collaborative applications for wearable computing.
To provide this functionality, sensor fusion of geospatial position and inertial measurements will allow the device to determine the user’s location and head orientation for applications such as driving directions or virtual tours. A team-designed power management system will be used to maximize battery life. Wireless connectivity and a user interface designed for wearable computing will also be critical for integration with other headsets. This device aims to minimize disruption of the user’s peripheral vision while also avoiding the eyestrain of constant visual refocusing. The device will eventually be tested both indoors and outdoors to ensure adequate visibility in a range of lighting conditions and user-friendliness for a non-technical audience.
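The sensor-fusion step described above can be sketched with a complementary filter, a common way to blend drifting gyroscope rates with noisy but drift-free accelerometer angles. This is an illustrative sketch, not the team's actual implementation; the blend factor and sample rate are assumed values.

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer-derived angle.

    The gyro integrates smoothly but drifts over time; the accelerometer
    is noisy but has no drift. Blending the two yields a stable estimate
    of head orientation. alpha and dt here are assumed, not specified.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulated head pitch: gyro reads a constant 10 deg/s while the
# accelerometer insists the true angle is 1 deg; the estimate settles
# between the two instead of drifting without bound.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=10.0, accel_angle=1.0, dt=0.01)
```

A full headset would run one such filter per axis (or a quaternion-based equivalent) and feed the result to the display overlay.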
Southern Illinois University Carbondale
As one of the poorest nations both economically and medically in the Western Hemisphere, Haiti epitomizes many of the challenges facing the provision of healthcare in the third world, including a lack of literacy, unreliable power supplies, slow internet connections, harsh environmental conditions, and linguistic incompatibility that together inhibit the adoption of modern medical practices. The field of telemedicine has long promised technologies to mitigate or eliminate such concerns while offering the benefits of first world medical care in the third world, but it has consistently underdelivered.
The proposed solution seeks to reverse this fruitless trend in telemedicine by providing a suite of four devices that together connect a patient in the third world with a doctor in the first world. A ruggedized device in the third world will provide automated identification of patients, automated collection of patients’ vital signs, manual entry of patients’ symptoms, and presentation of a doctor’s diagnoses and treatment recommendations. A web application in the first world will provide doctors with access to all current and historical patient data as well as a means to enter diagnoses and treatment recommendations. Servers in both the third world and first world will manage communications between the devices, maintain databases of all patient data, and handle translation services. While the solution will be designed for general deployment in the third world, it will be benchmarked against the conditions present in Haiti.
University of Massachusetts-Amherst
People today pay more attention to their privacy than ever before. When a document containing private or otherwise sensitive information is sent to someone over the internet or by courier, we want only the authenticated recipient to be able to read the sensitive content; anyone else, including whoever finds a lost or intercepted copy, should see only the general information.
To meet this challenge, we encrypt the sensitive portions of the document locally. We use an encrypted QR code to represent the protected content and a public/private key pair for encryption and decryption. Because a locally encrypted document can exist in both electronic and paper form, our project can recover the protected content both through software and through a dedicated reader device.
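As a sketch of the public/private-key scheme just described, the toy RSA example below encrypts a byte string whose ciphertext could then be packed into a QR code. The tiny primes keep it readable but completely insecure; a real implementation would use a vetted cryptography library and hybrid (asymmetric + symmetric) encryption.

```python
# Toy RSA with tiny primes, for illustration only -- NOT secure.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

def encrypt(message: bytes) -> list:
    # Encrypt each byte with the public key (e, n).
    return [pow(b, e, n) for b in message]

def decrypt(cipher: list) -> bytes:
    # Recover each byte with the private key (d, n).
    return bytes(pow(c, d, n) for c in cipher)

secret = b"sensitive content"
cipher = encrypt(secret)       # these integers would be encoded into the QR code
```

Anyone can scan the QR code and read the ciphertext integers, but only the holder of the private exponent `d` can recover the plaintext.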
ETC – Automated Adjusting Suspension
Seattle Pacific University
AAS is a system that adjusts the suspension of a vehicle based on the condition of the road ahead of it. The goal is to create a vehicle that can intelligently adapt to the roads it encounters so as to improve road isolation and handling for automobiles. The system will consist of sensors that return a profile of the road ahead of the vehicle. The suspension components will then quickly adjust for the impending irregularities in the terrain. The system will specifically consist of the following:
- Sensors that gather information such as: the profile of the terrain ahead, wheel orientation, suspension position, etc.
- Independent electronically-controlled suspension for each wheel.
By gathering data from the terrain in front of the vehicle, AAS will bring active suspension to a market dominated by passive systems. Adjustable suspension systems do exist, but they are passive by nature. We are creating a car that can see the road ahead of it.
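The lookahead idea can be sketched as a simple mapping from upcoming road-height samples to damper settings. This is an illustrative toy, assuming evenly spaced height samples from the forward sensors and a hypothetical 0–10 stiffness scale; the real controller would also account for vehicle speed and actuator latency.

```python
def damper_settings(road_profile, max_setting=10):
    """Map upcoming road-height samples (metres) to damper stiffness settings.

    Rougher terrain (a larger height step between adjacent samples) gets a
    softer damper to absorb the bump; smooth road keeps a firm setting.
    The 0.05 m full-softness threshold is an assumed tuning value.
    """
    settings = []
    for prev, cur in zip(road_profile, road_profile[1:]):
        roughness = abs(cur - prev)
        softness = min(roughness / 0.05, 1.0)   # 0 = smooth, 1 = very rough
        settings.append(round(max_setting * (1 - softness)))
    return settings
```

Each returned value would be sent to the electronically controlled damper of the wheel about to cross that segment.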
The purpose of this project is to propose an advanced GPS navigation system that will close the gap between current navigation interfaces and the real-world driving experience by overlaying the driver’s route on a live stream of the driver’s-eye view. This system will improve safety while driving with GPS navigation. The system will employ a camera as the source of the live stream that will be displayed on an LCD display. The camera is connected directly to the DE5i (Intel Atom + Altera FPGA) board, which allows ultra-fast image processing to overlay the driver’s route. The route data is received from a mobile device via the EyeView app. The DE5i board then parses and interprets the data received from the mobile device, overlays the resulting route images (generated from the data) on the video frames, and displays the result on the LCD screen. This advanced navigation system also includes an additional safety feature for traffic light detection and recognition to alert the driver when approaching a traffic light. In essence, we hope our device will advance GPS navigation technology.
University of California, San Diego
Nocturnal animal behavior is often a mystery to researchers due to the dangers of working in darkness and inherent human disturbance to the wildlife. This is particularly true for our collaborators at the California Wolf Center. Researchers there face the difficulty of attempting to study an animal most active at night. Moreover, their end goal is to release the wolves back into the wild, so any human interaction with the animals runs the risk of the wolves becoming emboldened in the presence of humans, which is detrimental to the rehabilitation process. Our solution to this is FANGS: the Functionally Autonomous Nocturnal Guidance System.
FANGS is a remote-controlled vehicle with IR cameras and LIDAR for obstacle awareness and subject tracking in low-light conditions. FANGS requires a human driver for direction but assists that driver by intelligently avoiding obstacles and providing cues when visibility is low at night. Computer vision algorithms will allow the vehicle to navigate the terrain and track the wolves once they are in sight of the IR cameras. Once in the presence of the subjects, the cameras will record data about the wolves and their behavior. Placing these sensors on a mobile platform allows researchers to move around, locate the wolves, and record behavioral and physical data about the animals each night, giving unprecedented data to wildlife researchers.
Oregon State University
Dr. Wattson: Power Analyzer is an integrated system which will bring real-time data collection, logging, and analysis of home energy usage to consumers in an easy and understandable way. Presently, few consumers have the tools to make informed decisions about their energy use. The goal of this project is to help people evaluate and reduce their current energy consumption habits through measurement-based recommendations and social network interaction. The Dr. Wattson monitoring system consists of multiple outlet-mounted monitoring units, all of which communicate with a central base station. This base station will record and process the user’s energy use habits, presenting them with easy-to-understand graphs, comparisons, and suggestions to help them save energy and money. Integration with existing social networking sites will allow users to share and compare their energy consumption, encouraging continued effort toward reducing their power use.
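The base station's aggregation step might look something like the sketch below, which rolls hourly per-outlet wattage samples into kWh and cost and flags the largest consumer. The flat $/kWh rate and the data layout are assumptions for illustration, not details from the proposal.

```python
def usage_report(readings, rate=0.12):
    """Summarize per-outlet energy use and point out where to save.

    readings: {outlet_name: [hourly power samples in watts]}
    rate: electricity price in $/kWh (assumed flat rate).
    """
    report = {}
    for outlet, watts in readings.items():
        kwh = sum(watts) / 1000.0            # one sample per hour: Wh -> kWh
        report[outlet] = {"kwh": kwh, "cost": round(kwh * rate, 2)}
    worst = max(report, key=lambda o: report[o]["kwh"])
    report["suggestion"] = f"Largest consumer: {worst}"
    return report

day = usage_report({"fridge": [100] * 24, "tv": [50] * 24})
```

The same summary structure could feed both the on-screen graphs and the social-network comparisons.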
Fitness Self-Assessment Tool (FSAT)
Portland State University
Exercising is increasingly popular as 45 million American adults have gym memberships. Ironically, a recent study shows a 35 percent increase in workout-related injuries in recent years due to improper technique or inappropriate exercises. Personal trainers usually assess clients by closely observing them as they perform simple exercises prior to creating a training plan for the individual. However, the cost of personal trainers makes them unaffordable for many people. Moreover, these assessments can be subjective, with results varying depending upon the trainer.
We propose a Fitness Self-Assessment Tool (FSAT), capable of assessing a client’s physical ability, particularly identifying physical weaknesses, then recommending appropriate exercises. The FSAT will guide the user in performing the Overhead Squat and Single Leg Squat exercises, commonly used in assessing a person’s mobility. The FSAT will analyze captured video and extract key characteristics such as limb angles, alignment, and extension. Based upon these, the software will assess the user’s mobility and weaknesses and recommend specific workout exercises. The images will be retained in a database so that individuals can observe their progress over time, and as a training aid for personal trainers learning to perform assessments.
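Once pose keypoints have been extracted from the captured video, computing a limb angle reduces to a vector-angle calculation at each joint. The sketch below shows the idea for a knee angle from hip/knee/ankle pixel coordinates; the keypoint source and coordinates are assumed for illustration.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Knee angle mid-squat, from hypothetical hip, knee, and ankle pixel coordinates.
knee = joint_angle((100, 50), (100, 100), (140, 140))
```

Comparing such angles against reference ranges for the Overhead Squat would let the software flag limited mobility automatically.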
Home-Automation Integrated Logic (HAIL)
University of Michigan
The growth of information technology has afforded us a variety of means to get connected to our surroundings. However, interactions between people and the environment are far less frequent around the house than in public or commercial settings. For instance, we cannot take control of most household electronics unless we touch them directly by hand. Although devices usually have good knowledge of their own status, they have no idea how they affect things around them due to a lack of updated environmental and feedback data. An oven would never know if it is burning just the food or the whole house. Furthermore, the overall environmental data which most in-home facilities depend on is neither fully gathered nor properly handled by people or machines.
The Home-Automation Integrated Logic (HAIL) project aims to create an interactive connection between home owner(s) and the in-home environment. Via special-purpose sensors, cameras, and other embedded modules, the HAIL system would be able to picture the overall in-home environment and forward all collected data to a central control unit. The control unit would use embedded AI to take proper control of the devices connected to it. It would also use a graphical interface to present the data to the owner(s) and communicate with them if necessary.
University of Rochester
Haptic technology describes the use of touch to convey information to an active participant, often in collaboration with traditional forms of video and auditory display. Most haptic interfaces consist of small, self-contained feedback flywheels or buzzers hidden within control mechanisms, relying on the imagination and reflexes of the controlling person to interpret vibrations as an additional signal.
Our goal is to use haptic feedback to convey the sensation of touch for users interacting with a virtual environment. By using commercially available camera capture software, user(s) in an otherwise unoccupied room will be reconstructed as digital representations in a virtual coordinate space. Using vibrating patches placed across the body, the user will be able to “feel” virtual objects that have been “placed” in a virtual space, receiving buzzer feedback when a hand or limb intersects with a computer generated boundary.
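The core test behind the buzzer feedback can be sketched as a point-in-box check: fire the haptic patch whose tracked limb point has entered a virtual axis-aligned box. The data layout below is a simplifying assumption for illustration, not the team's design; real virtual objects would likely use richer geometry.

```python
def buzzer_commands(limb_points, boxes):
    """Return which haptic patches to fire this frame.

    limb_points: {patch_name: (x, y, z)} -- tracked limb positions in the
                 virtual coordinate space (from the camera capture system).
    boxes: list of virtual objects as (xmin, ymin, zmin, xmax, ymax, zmax).
    """
    fire = []
    for patch, (x, y, z) in limb_points.items():
        for xmin, ymin, zmin, xmax, ymax, zmax in boxes:
            if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
                fire.append(patch)   # limb intersects a virtual boundary
                break
    return fire
```

Run once per capture frame, this yields the set of patches to vibrate so the user "feels" the object they just touched.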
Medication errors are an all-too-common occurrence in medical services, especially long-term care. A large majority of these medication errors are due to improper administration and subsequent documentation. To reduce the number of these medication errors in places such as nursing homes and assisted living centers, we propose an embedded medication distribution system that will 1) keep track of inventory, 2) distribute medication in an efficient, consistent manner, and 3) monitor medication adherence by reminding residents to take their medication at the appropriate times. The system will consist of a secure central server for storing resident and medication information, one or more medication dispensers, and individual wristband/bodypacks for notifications and identification. When a resident needs medication, the server notifies the resident via his or her bodypack. The resident then proceeds to the dispenser to take the medication. By transferring responsibility for drug administration and documentation from humans to machine, this system can reduce medical issues arising from medication errors in long-term health care.
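The reminder logic in step 3 might be sketched as a due-dose check against each resident's schedule. The schedule layout and the 30-minute dosing window below are illustrative assumptions, not details from the proposal.

```python
def due_medications(schedule, taken, now_minutes, window=30):
    """Return doses currently due for one resident.

    schedule: {med_name: [dose times in minutes since midnight]}
    taken: set of (med_name, dose_time) pairs already dispensed today
    A dose is 'due' from its scheduled time until `window` minutes after;
    the server would push a bodypack notification for each returned dose.
    """
    due = []
    for med, times in schedule.items():
        for t in times:
            if t <= now_minutes <= t + window and (med, t) not in taken:
                due.append((med, t))
    return due
```

The same check, run against the dispenser's log, also yields the adherence record: any dose still listed after its window closed was missed.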
University of Pennsylvania
Television has always been a source of excitement and a form of entertainment. Yet, the experience has become a passive activity and much of the initial excitement that the television induced in the early years has been lost.
To make the television viewing experience more interactive and immersive, we present a set-top-box of the future capable of allowing a user to interact with a TV show of their choice. We will develop a system that is synchronized with sequences in a movie or a pre-recorded sports game and integrates physical aspects such as lighting, haptic feedback, and synthesized sound effects, so that the entire room is activated when an appropriate context-aware video signal is received.
Another part of the project allows users to purchase anything they see within a TV show, creating an environment where the user selects a product of their choice rather than having a broadcasting corporation push products at them.
Our solution will focus on the authoring and run-time system for such embedded digital content; the design and development of the overall set-top-box system architecture; and demonstration of the integrated system with popular TV shows. In this way we can merge the virtual and the real world while demonstrating the power of the platform.
University of Pittsburgh
The split-second decisions and oversights of friendly forces are the leading cause of isolated, missing, detained, or captured (IMDC) personnel in the United States military. Even after an isolation incident occurs, it can be some time before it is recognized that the individual or individuals in question are not accounted for and before the issue is reported. Our technology, the KoalaKollar, seeks to automatically and immediately notify a commanding officer when an isolation incident has occurred so that swift action may be taken to reduce the chance of requiring a costly personnel recovery operation. The system consists of a KoalaHub worn by a squad commander, KoalaTransponders worn by every other squad member, and a KoalaTracker device used by the squad commander to view the status and relative location of the other squad members. If a squad member’s KoalaTransponder ceases to respond, an isolation event has occurred and the commander is immediately informed, in addition to being presented with the last known location (relative to him/her) of the now-IMDC individual.
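Detecting that a transponder has "ceased to respond" amounts to a heartbeat timeout check over the pings the hub receives, which might be sketched as below. The 60-second timeout is an assumed value, not a stated requirement.

```python
def isolation_events(last_seen, now, timeout=60.0):
    """Flag squad members whose transponder has gone silent.

    last_seen: {member_id: timestamp (seconds) of last received ping}
    Returns the members not heard from within `timeout` seconds; the hub
    would raise an isolation alert for each, with their last known position.
    """
    return sorted(m for m, t in last_seen.items() if now - t > timeout)
```

The hub updates `last_seen` on every ping, so a member appears in the result only after a full timeout with no contact.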
University of Florida
Cities face emergency situations almost every day. The fatalities and/or casualties arising from such situations depend heavily on the response time of the Emergency Response Team (ERT). That response time, in turn, depends largely on traffic conditions, which worsen day by day. Well-informed, well-regulated traffic allows unobstructed passage of the ERT, ensuring a quicker response to emergency situations. Hence, finding a way for the ERT to reach a crisis quickly has become an utmost necessity, and we propose a partial solution: MakeWay Gators (MWG).
MWG utilizes the on-board ERT GPS and communicates with team-developed sensors at intersections to ensure safe passage of the ERT. MWG also informs the GPS devices on board surrounding vehicles (if available) of the ERT’s arrival to regulate traffic. A special signal alerts traffic at each intersection to the ERT’s approach. Sensors installed at railroad crossings monitor arriving trains and either stop the trains or route the ERT on a detour. MWG will be deployed in small sections and integrated to provide a city-wide solution. Because its application is safety critical, MWG will be tested extensively for fail-safe operation and performance.
Motion Safe Systems (MSS)
Oregon State University
Transportation safety is a major concern in today’s society. According to the World Health Organization, 1.2 million people are killed, and as many as 50 million are injured, in road accidents around the world each year. Our team proposes the creation of a wireless network that communicates among nearly all types of vehicles on today’s roadways to create not only a safer traveling experience but also a less stressful one.
By providing a way for vehicles to communicate with each other, we can save lives and decrease the number of vehicle accidents. Our product, MotionLink, will reliably provide informative alerts about current surroundings to the operators of all types of vehicles, including cars made after 1996, bicycles, and motorcycles. MotionLink is anchored by an embedded sensor platform we’re designing that will communicate by radio using a wireless mesh network protocol. This will make the roads a safer place by giving drivers real-time information updates from other MotionLinked vehicles.
University of Massachusetts-Lowell
Our team proposes to design a mobile robot for hospitals that delivers medications through task scheduling. Taking into account each patient’s medication type and dosage, our system will automate a process that is currently time consuming and prone to human error. The current process for scheduling and dispensing medicine requires hospital pharmacists to fill prescriptions, label the medication with the patient’s information and directions, and finally give the medication to the patient’s nurse and care team, who dispense the medication and then monitor the patient. Because hospitals are extremely busy, the many small details that must be carefully checked leave too much room for human error. If errors are made, patients can suffer harmful side effects that in extreme cases can lead to death.
Our proposed design, Mr. Meds, will showcase a new mobile robotic platform designed to lessen human error associated with medication scheduling and dispensing. Using cutting edge technology such as face and voice recognition, Mr. Meds will have the capability to interact with patients making the process more enjoyable. Interfacing through an Android mobile phone, Mr. Meds will also make the job of a pharmacist more streamlined.
Arizona State University
Robotic systems are predicted to become an essential part of everyday life. Modern manufacturing industries already rely heavily on robots to facilitate and speed up the assembly of fine products and parts. Advances in robotics research in academia are pushing the robotics industry forward at a great pace. Universities are getting more involved in various aspects of robotics research. Each day, new possibilities appear for robots to improve the quality of life for Americans, even if they don’t know it.
The Defense Advanced Research Projects Agency (DARPA), a government military research agency, has created a list of Robotics Challenges for universities and researchers to accomplish. One of the main focuses of these challenges is to create frameworks that allow robots to help in disaster relief efforts. The NAO Navigators have taken on the challenge of “teaching” an autonomous humanoid robot to operate a vehicle built for humans. This framework will be demonstrated using a NAO humanoid robot and a small electric car. The hope is for this framework to be used to create human-sized robots that can commandeer any type of vehicle.
Worcester Polytechnic Institute
The aging population of the United States is creating a growing need to provide assistive care for the elderly and for people with disabilities. As the Baby Boomer generation enters retirement, the ratio of caregivers to those that require assistance is projected to decrease. There are currently no commercially available modular assistive robots that can fill this need. Our project aims to provide an alternative to current assistive living options through the development, construction, and testing of a Personal Assistance Robot (PARbot) that allows individuals with general or age-related disabilities to maintain some aspects of their independence, such as the ability to shop.
Our unique solution implements a design that allows user-oriented customization. Modularity is a key component of the design, allowing for future expansion and user customization. The robot will be designed to ADA specifications to ensure that it can operate anywhere the user desires. Human-Robot Interaction (HRI) will be an important aspect of our project; users should feel comfortable in the presence of the robot. PARbot will be capable of navigating public areas, such as a grocery store, and will use a Simultaneous Localization and Mapping (SLAM) algorithm for navigation while tracking its user.
RISE (Real-time In-Home Stroke Rehabilitation System)
University of Massachusetts-Lowell
Stroke is one of the leading causes of death, with over a hundred thousand deaths in America alone every year. According to the Centers for Disease Control, over seven hundred thousand people suffer a stroke every year, and the chances of one occurring double every ten years after the age of 55. While physical therapy benefits stroke victims, it can be inconvenient and costly for many, especially elderly patients. Fortunately, the emergence of new gaming devices (e.g., Kinect, Wii) opens an unprecedented opportunity to fundamentally transform traditional stroke therapy by developing new techniques using low-cost and unobtrusive devices. The team proposes to do this by designing a computer-aided medical rehabilitation system called “RISE – A Real-time In-Home Stroke Rehabilitation System Using Inexpensive and Unobtrusive Game Devices”.
Utilizing the new sensors in the Kinect, the team aims to build an efficient and cost effective rehabilitation system powered by the Intel Atom board that can be used in the patient’s home. The team will design a system that accurately detects and tracks patients in various positions. The collected data can be assessed by the system to determine the patient’s progress and can be viewed by the physical therapist.
SAFE (Situational Awareness Fault‐Finder Extension)
Portland State University
A majority of motorcycle accidents result from other vehicles failing to detect the presence of the biker. Besides dressing brightly, the motorcyclist must maintain high situational awareness and drive defensively to compensate for their lack of conspicuity. An intelligent system that enhances an operator’s awareness could prevent fatal accidents. Such a system could also assist the operator of a larger vehicle in noticing the motorcyclist. Furthermore, distracted drivers could be alerted to potentially dangerous situations.
We propose building an affordable Situational Awareness Fault‐Finder Extension (SAFE) device, which could be used with automobiles, motorbikes, or bicycles. SAFE would be able to track lateral and posterior vehicles, monitoring their relative speed, position, and acceleration. An overhead representation of the surrounding traffic would be displayed to the user, with hazards and their probability being color‐coded. A vehicle slowly drifting into the operator’s lane might be amber, while a speeding vehicle about to rear‐end them might be red. Similarly, a driver inadvertently turning into another vehicle would also be notified. External lights would be lit to alert other vehicles of the impending collision. Finally, a SAFE device would integrate both day and nighttime operation, enhancing the user’s awareness whenever needed.
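The color-coding described above can be sketched as a time-to-collision (TTC) computation per tracked vehicle. The thresholds below are illustrative guesses, not tuned values from the proposal.

```python
def hazard_level(rel_distance, rel_speed):
    """Color-code a tracked vehicle by time-to-collision.

    rel_distance: metres between the tracked vehicle and the operator.
    rel_speed: closing speed in m/s (positive means it is approaching).
    Threshold values (2 s, 5 s) are assumed for illustration.
    """
    if rel_speed <= 0:              # not closing: no hazard
        return "green"
    ttc = rel_distance / rel_speed
    if ttc < 2.0:
        return "red"                # imminent, e.g. about to rear-end
    if ttc < 5.0:
        return "amber"              # e.g. slowly drifting into the lane
    return "green"
```

Each tracked vehicle's color would be drawn at its position on the overhead display, and a "red" result would also trigger the external warning lights.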
Sensitive Calligraphy Robot
Worcester Polytechnic Institute
Our goal is to develop a control system that combines precision position control and precision force control by building a robot that writes shaded calligraphy scripts. Individually, position control and force control each involve tradeoffs. While position control can facilitate extremely precise motion paths, it tends to require very stiff end effectors that are prone to contact instability when they attempt to interact with an object. Force control facilitates ease of interaction but often comes at the price of increased uncertainty in position and motion paths.
There are many potential applications for a combined control system. In particular, surgical robots that make incisions with scalpels require precision in both position and force application. This problem is an open research question with an ongoing conversation regarding possible solutions and implementations.
Calligraphy provides an excellent context for a proof-of-concept that can then be transferred to other applications. We will utilize Series Elastic Actuators (SEAs) to introduce compliance and precision force control while fulfilling the need for smooth, fluid, and continuous movements.
Smart Energy Micro Grid (SEMG)
University of Houston
Energy is a major concern in today’s society. Considering the high cost of fuel, the rise in air pollution, global warming, and the low efficiency of power plants, it is clear that society needs to rethink the way it consumes energy. However, the process of shifting from the current model to a new one, such as micro grid systems, is very challenging due to the complexity and cost of new systems.
As a partial solution to the issue, this team proposes the creation of a Smart Energy Micro Grid system or SEMG. SEMG will be a multitask system capable of controlling the power distribution process, protecting the grid, ensuring the efficiency of the system, and monitoring overall power usage across multiple sources. With these properties the SEMG will act as a universal control system with a broad range of capabilities. As a result, the SEMG will lower the cost and complexity of micro grid systems.
Smart Robotic Prosthetic Hand
Worcester Polytechnic Institute
Though the prosthesis industry has experienced a great revolution in upper body prostheses with the introduction of advanced myoelectric grippers, users are still far from being able to complete everyday tasks with the same level of ease they once could. Complaints about available products like the i-Limb and Bebionic hand include difficulty in performing tasks due to complex user interfaces, as well as a high market cost that makes the technology especially expensive for people without insurance. As a solution to these issues, we propose the creation of a Smart Prosthetic Hand – a semiautonomous robotic prosthesis capable of determining the most appropriate grip for grasping an object and then executing that grip.
The Smart Prosthetic Hand will be an anthropomorphic prosthesis with independently movable fingers capable of executing a variety of grips. Autonomy will be achieved through the use of a unique control system which will take input from sensors in the hand to determine the shape of an object, the position of each finger, and quality of grip. Ultimately we will work with professionals and potential users to determine whether these features make the device easier to use and more effective.
University of Illinois at Urbana-Champaign
Canines can be thought of as incredibly complex machines with efficient and sensitive data acquisition and processing, as well as adept navigation of unknown environments. As such, they are extremely valuable in various service animal roles in the community and in law enforcement. However, we believe that such capabilities and data can be augmented and improved to add further value to the animal-handler relationship.
SmartCollar is a wearable computer and data acquisition system intended for service and rescue dogs in the field. SmartCollar will enable new modes of communication and new data streams to be sent to handlers for further processing. Useful information can be obtained from a sensor array and processed on board. This can provide additional sensing capabilities and greater fidelity of the dog-provided data to the handler, allowing for aggregate human-in-the-loop control of multiple agents.
In a search and rescue function, a single handler can remotely verify the dog’s findings and provide feedback with further instructions. In a service animal role, dogs can detect various medical emergencies and contact 911, with the alert enhanced by biometric notes and location data. We see this platform as a generalized means to increase the fidelity of the data that handlers receive from dogs, as well as a method for the animal to communicate within the Internet of Things.
University of Houston
“Practice does not make perfect. Only perfect practice makes perfect” – Vince Lombardi
Basketball players practice their technique, approach, and overall body positioning while attempting a shot. The problem with the current way athletes practice is that a shot can be made with bad technique. This constant reinforcement of bad habits confuses the body and prevents muscle memory from ever truly setting in. We propose a jump-shot analysis system based on inertial measurement unit (IMU) motion capture to gain a better understanding of the actions performed in successful shot attempts.
A thorough breakdown of every movement and muscular signal, using electromyography (EMG), will allow us to relay critical information to athletes, enabling them to truly practice perfectly. While there are systems that track the angle of the ball’s trajectory, no one has created a system for full-body analysis. The IMU motion capture suits on the market today are unaffordable for most consumers. We will compete on price with these products, as well as provide more useful information about the entire shooting process, possible corrections, and real-time analysis of the players. This will become a one-on-one coach for every player.
According to the World Health Organization (WHO), around 40 to 45 million people in the world are totally blind. Due to longer life expectancies, there is a considerable increase in the number of people who are blind, and this number is expected to double by 2020.
As a result, there has recently been considerable focus on designing intelligent systems that enhance the lifestyle of the visually impaired. However, the majority of existing systems mainly address obstacle avoidance and provide only crude navigational guidance. Only a few solutions try to give the user a holistic perception of his or her environment. Hence, it is worthwhile to design a system that assists the visually impaired in outdoor navigation and can ultimately enhance the way they perceive the environment around them.
The proposed project, called the Interactive Vision Assistant or "I.V.A.", aims to design an interactive system that assists the blind in outdoor navigation by offering intelligent feedback about the surroundings in addition to directional guidance. Beyond navigation and obstacle avoidance, the project incorporates tasks such as Human Detection and Proactive Feedback.
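The abstract leaves the directional-guidance mechanism unspecified; one standard building block is converting the user's GPS position and compass heading into a spoken turn instruction toward the next waypoint. A minimal sketch using the standard great-circle bearing formula (the function names and the 15-degree "straight ahead" band are illustrative assumptions):

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, 0 = north) from the user's
    position to the next waypoint, via the great-circle formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def spoken_direction(bearing, heading):
    """Turn instruction relative to the user's current compass heading."""
    turn = (bearing - heading + 180) % 360 - 180  # signed turn, (-180, 180]
    if abs(turn) < 15:
        return "continue straight"
    side = "right" if turn > 0 else "left"
    return f"turn {abs(round(turn))} degrees {side}"
```

The same relative-bearing logic could also voice the position of a detected person ("person ahead, slightly left"), tying Human Detection into the feedback channel.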
Team REBS (Remote Emergency Biomonitoring System)
University of Colorado-Denver
Remote Emergency Biomonitoring System (REBS) is a wireless vital-sign monitoring device combined with a scalable software interface, designed to meet and exceed the current challenges of remote vital-sign sampling and collection. Using a non-intrusive, lightweight vest and wristband and the ubiquity of portable WiFi networks, REBS provides a cost-effective way to monitor a large number of patients in emergency scenarios. In such scenarios, REBS is a viable option for an emergency triage system, where a large number of victims with different levels of medical need can be monitored and addressed in a timely manner. The REBS system can also address issues in current outpatient care, where most health monitoring equipment is either expensive to rent or too large to be comfortable for the individual wearing it. Overall, the REBS design goal is to leverage today's System-on-Chip (SoC) microprocessors to demonstrate that a monitoring device can be low-cost, portable, and scalable to a number of sensors. To that end, the REBS design will integrate a pulse-rate unit, an accelerometer, and a full heart monitor (including a blood pressure monitor and an ECG unit) for less than the cost of available ECG systems.
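The abstract does not describe the pulse-rate unit's signal processing, but a common minimal approach is to count upward threshold crossings in the raw pulse waveform and convert the beat count to beats per minute. A sketch under that assumption (the function and threshold are illustrative, not REBS's actual algorithm):

```python
def heart_rate_bpm(samples, sample_rate_hz, threshold):
    """Estimate pulse rate by counting upward threshold crossings
    in a pulse-sensor waveform."""
    beats = 0
    above = samples[0] > threshold
    for s in samples[1:]:
        if s > threshold and not above:
            beats += 1  # rising edge = one detected beat
        above = s > threshold
    duration_min = len(samples) / sample_rate_hz / 60
    return beats / duration_min
```

In a triage setting, each wearable would stream such per-window estimates over WiFi, letting the central station rank patients without any manual measurement.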
Team SPARC (Solar Powered Aquatic Research Craft)
University of Pennsylvania
Researching the world’s oceans is critical for understanding biodiversity, climate change, weather patterns, and more. Unfortunately, collecting this data takes dedicated ships and full-time crews. Due to the high expense involved with this approach, we have very little data about our own oceans.
Team SPARC (Solar Powered Aquatic Research Craft) wants to fix this problem by building an autonomous craft capable of collecting data from long voyages and transmitting this data back to scientists. A key feature will be a detachable, tethered sensor module which can be lowered into the ocean to collect data from different depths. This will set our craft apart from existing autonomous models, which can only return data from the ocean surface. Another particular focus will be making the system cost-effective, so that research teams from around the world can afford to buy and use it.
To understand exactly what researchers need, we’ve initiated contact with scientists at a few research institutes and universities. We plan to frequently get their feedback to ensure that our autonomous craft will be truly relevant and useful. By March, Team SPARC plans to perform a week-long test voyage to verify the success of our design.
Pennsylvania State University
Earth is our home. Around 72% of its surface is covered by oceans, which continuously undergo significant changes driven by numerous internal and external forces. Over the past century and a half, humans have had a seriously negative impact on the oceans. The short- and long-term effects of human intervention, such as pollution, greenhouse gas emissions, oil spills, and radioactive leaks, are still not fully understood. Today there exists a wide variety of means for studying the oceans' changes, including research vessels as well as expensive scientific flotation devices.
We propose a new system for conducting oceanic studies with devices called AUTOCOs: low-cost autonomous ocean climate observers packed with an array of sensors, designed to drift with the ocean currents and deliver real-time data to a centralized computer. The real-time data will be queried and cross-referenced with astronomical or human-caused events as well as weather conditions. This data could then be obtained freely via our APIs and used by scientists and researchers around the world to aid their study of the oceans and climate change.
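The abstract does not define the telemetry format, but a drifting observer reporting to a centralized computer typically packages each sensor sweep as a small self-describing record. A sketch of one plausible shape, serialized as JSON (all field names here are hypothetical, not the AUTOCO schema):

```python
import json
import time

def autoco_reading(buoy_id, lat, lon, sensors):
    """Package one sensor sweep as a JSON record for the central server.
    `sensors` maps hypothetical sensor names to values,
    e.g. {"sst_c": 18.2, "salinity_psu": 35.1}."""
    return json.dumps({
        "buoy_id": buoy_id,
        "timestamp": time.time(),          # seconds since the epoch
        "position": {"lat": lat, "lon": lon},
        "sensors": sensors,
    })
```

A record like this is trivial to ingest on the server side and to re-expose through a public query API, which is what makes the "freely obtainable data" goal cheap to support.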
University of Illinois at Urbana-Champaign
Blindness is the condition of lacking visual perception due to physiological or neurological factors. As of 2011, there were about 60,000 legally blind children (through age 21) enrolled in school and 6 million legally blind adults living in the United States. They usually depend on canes to avoid potential dangers and gather information about distant objects. However, the tactile feedback provided by the cane fails to offer a well-rounded perspective on the surroundings. To solve this problem, our team suggests building a Sensing Glove capable of providing greater sensory detail about the surroundings.
The inside of the glove will be mapped with a controllable electric grid. Each cell in the grid will be electrically heated according to graphical information from two add-on cameras, which will provide depth and shape information about the object pointed at. Through image signal processing, with edge detection and depth calculation, we will be able to model the same structure by compressing and expanding certain cells on the glove. The person wearing such a glove would be able to feel the relative distance to the object, its edges, and its shape, thus receiving real-time, tangible, multi-perspective information about the surroundings.
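The core mapping step described above, reducing a dense camera depth map to one value per glove cell, can be sketched as a simple block-average downsampling (the function is illustrative; the team's actual pipeline, with edge detection, is not specified at this level of detail):

```python
def depth_to_grid(depth_map, grid_rows, grid_cols):
    """Downsample a dense depth map (2-D list of distances) into one
    average depth per glove grid cell; nearer blocks would then be
    driven to a stronger stimulus (e.g. more heating)."""
    rows, cols = len(depth_map), len(depth_map[0])
    grid = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            # Pixel block covered by this grid cell.
            r0, r1 = gr * rows // grid_rows, (gr + 1) * rows // grid_rows
            c0, c1 = gc * cols // grid_cols, (gc + 1) * cols // grid_cols
            block = [depth_map[r][c]
                     for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid
```

Each output cell then only needs a per-cell driver (heater or actuator) and a depth-to-intensity curve to complete the tactile rendering.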
Blog: no blog as of 3/3/14
University of Rochester
Tablets and e-readers have become a staple of modern life. These devices rely completely on visual output; a person who is blind has no way of using them, and currently has no parallel options. While some devices can dynamically display braille, such as the BrailleNote, they are limited to one line of braille and cost upwards of 4,000 USD. The difficulty of creating such a device lies in the challenge of efficiently producing a large array of braille-sized pins capable of moving up and down. We propose a solution that would allow the production of a pin array at the required scale using solenoids created with a printed circuit board.
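Whatever the actuator technology, the firmware side of such a display reduces to mapping each character onto the six raised/flat pins of a standard braille cell. A minimal sketch (only a few letters shown; the function names are illustrative):

```python
# Dots are numbered 1-3 down the left column and 4-6 down the right,
# per standard six-dot braille. A few letters for illustration.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

def solenoid_states(text):
    """For each character, return a 6-tuple of booleans:
    True = energize that pin's solenoid (raise it), False = leave flat.
    Unknown characters render as a blank cell."""
    cells = []
    for ch in text.lower():
        dots = BRAILLE_DOTS.get(ch, set())
        cells.append(tuple(d in dots for d in range(1, 7)))
    return cells
```

Driving a full page is then a matter of shifting these boolean patterns out to the PCB solenoid array row by row.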
University of Akron
The team’s project is a portable, variable digital audio processor that uses an audio input to generate new, user-defined sounds, in addition to applying all of the standard effects found in current effects systems. The audio processor will use a brain-computer interface and a tablet or smartphone to control the various parameters of the sound being created. This system will solve many of the issues musicians experience when playing live, such as a lack of stage presence from being tethered to an effects system by a cable, and the inability to recreate sounds made in the studio. Our system will be a completely new way to control live sound and instrument effects, combined with a robust digital audio workstation that can generate completely new sounds from the live audio input signal.
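As one concrete example of the "standard effects" an audio processor applies, a feedback delay (echo) mixes each input sample with an attenuated copy of the output from a fixed time earlier. A minimal sample-by-sample sketch (parameter names are illustrative; a real unit would run this in fixed-size DSP buffers):

```python
def echo(samples, sample_rate_hz, delay_s=0.25, feedback=0.5):
    """Feedback delay line: each output sample mixes the dry input with
    an attenuated copy of the output from `delay_s` seconds earlier."""
    delay_n = int(delay_s * sample_rate_hz)  # delay length in samples
    out = []
    for i, s in enumerate(samples):
        wet = out[i - delay_n] if i >= delay_n else 0.0
        out.append(s + feedback * wet)
    return out
```

In the proposed system, `delay_s` and `feedback` are exactly the kind of parameters the tablet or brain-computer interface would adjust in real time.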
University of Pennsylvania
Despite severe national staffing shortages, nurses and physician assistants, already carrying demanding workloads, collectively spend over 5 million hours per day measuring patient vital signs and recording the information in hospital databases. Nurses spend nearly 20% of a typical workday documenting patient data; embedded systems offer a more elegant, efficient alternative to manual data acquisition and processing. To partially relieve professionals of these cumbersome duties, our team proposes an unobtrusive sensor cluster designed to be worn for extended periods and capable of measuring and tracking several physiological vital signs.
VITAL will offer this functionality through an array of low-power sensors and wireless communication modules contained in a comfortable, aesthetic design. Continuously transmitted data will be parsed and recorded via the provided embedded platform with custom algorithms that calibrate sensors and track physiological changes, detecting health patterns and pinpointing users in need of immediate attention. Our novel approach is to design a system that is affordable, comfortable, accurate, and capable of communicating with a centralized database. Ultimately, VITAL will be tested in hospitals, outpatient clinics, military operations, catastrophic events, and large gatherings as a "mass triage" device and a viable alternative to manual recordings.
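The abstract does not specify how "users in need of immediate attention" would be pinpointed; one simple baseline is flagging any vital-sign reading that deviates sharply from its own recent rolling average. A sketch under that assumption (the window and tolerance are illustrative, not VITAL's actual algorithm):

```python
def flag_anomalies(readings, window=5, tolerance=0.2):
    """Return the indices of readings that deviate from the mean of the
    previous `window` readings by more than `tolerance` (fractional).
    Works per-signal, e.g. on a heart-rate stream in bpm."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if baseline and abs(readings[i] - baseline) / baseline > tolerance:
            flagged.append(i)
    return flagged
```

Running one such check per sensor stream on the central platform is what turns continuous transmission into actionable triage alerts.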