ARM (Autonomous Robotic Mechanism)
University of Massachusetts, Lowell
Independence is a large factor in quality of life, but not everyone has the ability to perform some of life’s most basic tasks unaided. Eating, for example, is a vital part of everyday life, and those who cannot feed themselves rely on health care providers for this task. The problem we intend to solve is providing independence to people who cannot feed themselves. Our solution is the design of a robotic feeding arm that can be controlled through various user interfaces such as push buttons, head movements, and facial recognition. This solution gives its user independence and frees up staff who would otherwise feed their patients by hand.
Autonomous Airborne Vehicle (AAV)
University of Pennsylvania
The use of Autonomous Airborne Vehicles (AAVs) for military applications, remote sensing, weather forecasting and commercial aerial surveillance is becoming more prevalent. AAVs can be implemented in reconnaissance missions, intelligence tasks, and search and rescue assignments. Our team proposes to develop a robust autopilot system for an AAV that is modular, reliable, and usable in a broad range of applications. The autopilot will contain a stable, lean code base that is portable to multiple hardware and software platforms depending on its desired use.
The AAV will be controlled either remotely or by an onboard autopilot system to fly a stable predetermined path entered by the user. The flight control system guides the flight of the AAV in autopilot mode, by using data from external positional sensors, calculating heading and orientation of the AAV, and sending signals to the physical mechanisms on the vehicle. Since safety is a critical issue in these applications, the flight control system will also be responsible for detecting any flight failures and consequently reverting to a preset recovery system. In conclusion, the AAV will perform the basic functionalities of automatic flight consistently and accurately, contain a portable autopilot system, and be tested thoroughly.
Assistive Robotic Manipulator (ARM)
There is an increasing demand for robots in the home – especially for people who require assistance with activities of daily living. Unfortunately, most assistive robotic arms on the market are beyond the budget of those who require their assistance. Our aim is to design a relatively inexpensive, human-friendly wheelchair-mounted robotic arm (WMRA).
The proposed WMRA will make use of series elastic actuators and will require the design of a control system that will allow it to “feel” around its environment without the need for vision. The series elastic actuators will provide compliance at the joints for safety and will be combined with potentiometers that convert angular displacement into torque feedback at each joint. To facilitate the assistance provided by the manipulator, a non-invasive brain-computer interface (BCI) will allow users to plan the manipulation of objects. Additionally, the team hopes to make the physical design of the WMRA as open-source as possible, allowing other universities and individuals to download the design and improve the capabilities of the arm.
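The series-elastic torque sensing described above can be sketched in a few lines: torque is inferred from the spring deflection between a motor-side and a joint-side angle measurement via Hooke’s law. The spring constant and all names below are illustrative assumptions, not values from the actual WMRA design.

```python
# Hypothetical sketch of series-elastic torque sensing: joint torque is
# inferred from the deflection of the elastic element between the
# motor-side and joint-side potentiometer readings (Hooke's law).
# SPRING_CONSTANT is an assumed example stiffness, not a real design value.

SPRING_CONSTANT = 8.5  # N*m/rad, assumed stiffness of the elastic element

def joint_torque(motor_angle_rad: float, joint_angle_rad: float) -> float:
    """Estimate joint torque from the elastic element's deflection."""
    deflection = motor_angle_rad - joint_angle_rad
    return SPRING_CONSTANT * deflection

# Example: the motor leads the joint by ~0.1 rad, so the arm is
# pressing against something with roughly 0.85 N*m of torque
torque = joint_torque(1.2, 1.1)
```

Because compliance is in the mechanism itself, this same deflection signal doubles as a safety measure: a collision shows up as torque before the rigid structure transmits a large force.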
Arizona State University
Applications of artificial intelligence research like IBM’s Watson and Apple’s Siri have been inspiring engineers to rethink the boundaries of human-technology interaction. Each of these projects, however, has its drawbacks. Watson is not a feasible option for those who cannot support its extremely high cost, and Siri, which is task-oriented, does not support general interaction, questioning, and knowledge acquisition. Real-life applications require advanced technology that is affordable. Users need speech to be an option for interacting with intelligent systems, not just text, and the system’s responses should be prompt yet accurate. Our project will combine the most advanced speech recognition algorithms, the most human-like simulated conversation techniques, and the latest 3D graphics developments to create an intelligent entity that satisfies these requirements. This ambitious project is made possible by the fact that we can accelerate existing software algorithms in hardware. The Altera Cyclone FPGA on Intel’s DE2i-150 board gives us the ability to implement this acceleration, thus speeding up the responsiveness of the entire system, while keeping our solution cost-effective and power-conscious. This unique system will create an exciting and effective alternative for human-technology interaction.
Many drinking water sources throughout the world are contaminated by pathogenic bacteria. When these pathogens are not properly detected, the result can be illness and death for the people drinking from these sources; waterborne pathogens have been reported to cause 10-20 million deaths each year worldwide. These numbers suggest that current testing and purification methods, which rely on chemical, manual, and mechanical approaches, are inefficient: they can be time-consuming, unsustainable, and/or inaccessible. A technically appropriate (easy-to-use), sustainable, automated device that can produce clean drinking water would contribute to the ongoing effort to make clean water sources accessible throughout the world. A prototype will be designed and developed that integrates ultraviolet radiation, biosensor circuits, photovoltaic cell systems, and an Intel microprocessor to produce clean water in developing countries and remote locations throughout the world.
Cyber Physical Systems
Worcester Polytechnic Institute
Our goal for this project is to provide individuals with locked-in syndrome the ability to live more independently and improve their overall quality of life through the use of a semi-autonomous wheelchair and Body/Brian Computer Interface. Although semi-autonomous wheelchairs have existed for the past decade, no commercially feasible solution has been presented due to the high costs associated with commonly used navigational sensors (such as LIDAR) and closed design frameworks that are often difficult to reproduce or expand upon. Our project is unique in the sense that our proposed solution revolves around cost-effective, modular sensor packages that can be easily mounted to a wide variety of commercially available powered wheelchairs, thus allowing for large scalability and ease of assembly. With this cost-effective and modular design in mind, the project team is working to create a product that will help advance the framework for design of various cyber physical systems. Our sensor suite is comprised of infrared and ultrasonic sensors used in tandem for low-level obstacle detection, simple-to-mount optical encoders used for acquiring odometry data, a Microsoft Kinect used for visual imagery, and a Body/Brain Computer Interface for sampling and processing a user’s EEG signals. The data acquired by these sensor packages will be processed by a high-level intelligent agent that will implement SLAM (Simultaneous Localization and Mapping) and allow for safe and reliable indoor navigation.
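The encoder-based odometry that feeds the SLAM agent can be sketched as standard differential-drive dead reckoning. The tick resolution, wheel radius, and wheel base below are assumed example values, not the actual wheelchair’s specifications.

```python
import math

# Illustrative sketch of dead-reckoning odometry from the two drive-wheel
# encoders -- the kind of pose data a SLAM agent consumes. All physical
# parameters here are assumed example values.

TICKS_PER_REV = 2048
WHEEL_RADIUS = 0.15   # meters, assumed
WHEEL_BASE = 0.55     # meters between the drive wheels, assumed

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the (x, y, theta) pose estimate by one encoder sample."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * per_tick
    d_right = right_ticks * per_tick
    d_center = (d_left + d_right) / 2        # distance moved by the center
    d_theta = (d_right - d_left) / WHEEL_BASE  # change in heading
    # Integrate along the average heading over the sample
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

# Equal tick counts -> straight-line motion along the current heading
pose = update_pose(0.0, 0.0, 0.0, 100, 100)
```

Encoder odometry drifts over time, which is exactly why SLAM fuses it with the Kinect imagery and range sensors to correct the accumulated error.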
Worcester Polytechnic Institute
As one of last year’s finalists, team Fivolts designed and built the Drowsiness Control System. This year, we will go a step further and design a multi-sensor, personal wireless biosignal sensing device. Individual health status and activity level are critical in many situations, including athletic events, public transportation, and senior care.
We therefore propose a solution that includes the wireless sensor set, a mobile application to process the multi-channel signals, and a monitoring system embedded on the Intel Atom board to visualize the status of users along with their geo-locations. This system will display not only the drowsiness level of users, but also medical indicators such as cardiac alternans and blood pressure. Major engineering improvements will be seen in secured communication and data storage, the implementation of artificial intelligence algorithms for drowsiness-level estimation, and a seamless phone application for fetching signals. The administrator of this system will be able to see a map of users with their health status instantaneously, while all original data will be encrypted and managed on the Atom board’s hard drive.
University of Colorado, Denver
The proliferation of cellular phones and mobile devices has shaped all aspects of communication, learning, and entertainment. However, traditional cellular phone service methods still struggle to provide adequate service coverage under various geographical and architectural constraints. Our project solves this issue with inexpensive, networked cell phone transceiver nodes that function together as a local extension to the global cellular network. Each transceiver node is a small single-board computer, about the size of a Wi-Fi router. The network connection provides a digital path for calls, bypassing the geographical and architectural constraints that would normally prevent coverage.
Our low-cost, modular approach to cellular phone service fills the gaps left by traditional distribution methods by taking advantage of emerging technologies of high-performance system-on-chip (SoC) architectures. Our proposed solution has the potential to work independent of existing cell networks or could be used to enhance existing system infrastructure. As an independent system, our solution enables the democratization of cellular phone service, by empowering communities to provide their own coverage. As an enhancement to existing systems, our solution provides an inexpensive alternative to new cell tower construction. A successful demonstration would allow a cellular phone user seamless integration between the provider network and our system.
University of Massachusetts, Lowell
Lower back pain (LBP) is among the most common neurological conditions in the world. According to the Arthritis Foundation, 50-80% of people in the U.S. are affected by back pain at some point in their lives.
Lower back pain can be attributed to several disorders such as arthritis, sciatica, and spinal stenosis. In some extreme cases, LBP can affect a person’s standard of living by making tasks more difficult. One of the specific tasks targeted by the team is yard work.
In regions such as New England, where weather is known to frequently change, yard work can become tedious—especially for those who suffer from LBP. Raking leaves is an example of a simple task made difficult for those with spinal and LBP conditions. The team proposes to use the Atom board to design and modify an AI based robot that (1) aids those already suffering from LBP and (2) attempts to reduce further back complications. The Leaf Bot is a robot that accomplishes this by using leaf recognition technology to eliminate the need for manual leaf removal services.
University of California, Berkeley
The shopping cart has been a great helper for customers since its invention. However, the shopping process is still purely manual, even though computer and electronics technology has evolved significantly. The traditional shopping cart is inconvenient for some groups, such as people with disabilities, pregnant women, and seniors. It is also just a carrier, when it could provide many additional benefits for customers and supermarkets.
To solve this problem and revolutionize the shopping experience, we propose a new “Intelligent Cart” that utilizes computer vision, wireless networking, and automation control. Together with supporting infrastructure, the Intelligent Cart can automatically follow its owner inside a supermarket. After the customer loads the groceries into his or her vehicle, the cart will return to the collection point by itself. We will also build a smartphone application that allows the user to command the cart to fetch selected items without supervision and return to the customer automatically.
Our solution will enhance the customer experience and decrease labor costs. It will not only revolutionize shopping for the general public, but also open a new path to self-service shopping for people who cannot easily handle manual carts.
Oregon State University
Modern vehicle interfaces are becoming increasingly complex as vehicles gain more adjustable features, but this complexity has moved the driving experience away from the road and created distractions for the driver. Distractions like these lead to thousands of traffic-related accidents each year. The sole purpose of the driver is to pay attention to the road and drive as safely as possible, while also having an enjoyable driving experience. The driver does need to make adjustments to the vehicle from time to time, but why should the controls be sprawled all over the dash and center console? Why can’t everything be adjusted from directly in front of the driver, via the steering wheel?
Our project is to enhance the driver experience by creating a simple interface concentrated in one region: the dashboard and steering wheel. Moving the controls closer to where the eyes watch the road reduces distractions and creates a safer driving experience. We will do this by creating a unified interface which controls all major subsystems of a car and reports back the state of critical systems.
Oregon State University
The objective of this project is to provide a complete, affordable home automation solution by transforming the house into a “smart home”. Although the idea of a smart home is not new, with products such as INSTEON and Control4 already available, installing a complete home automation system today can cost hundreds, if not thousands, of dollars. Our goal is to give consumers an affordable way to control and monitor many aspects of their home through an intuitive user interface available on any internet-connected device.
Our system will provide users with a low cost home automation system that is easy to install, configure, and control. The system will allow users to manage the devices, outlets, and power consumption throughout the home from a user-friendly interface. The system will offer users a way to monitor the current and power dissipated by each appliance connected through a system node. By allowing users to track their power usage, configure energy-settings, and remotely control their home environment, we hope that our system will help consumers save energy and encourage greener lifestyles.
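The per-appliance energy monitoring described above amounts to integrating instantaneous power (voltage times current) over time. The sketch below uses assumed names and sample readings, not the actual node firmware.

```python
# A minimal sketch (names and sample data are assumptions, not the actual
# system) of how a node could integrate instantaneous power readings into
# the per-appliance energy figures shown in the monitoring interface.

def energy_kwh(samples, interval_s):
    """Sum (voltage, current) samples, spaced interval_s apart, into kWh."""
    joules = sum(v * i * interval_s for v, i in samples)
    return joules / 3.6e6  # 1 kWh = 3.6 million joules

# One hour of 1-second samples from a 120 V appliance drawing 0.5 A:
# 60 W sustained for an hour is 0.06 kWh
readings = [(120.0, 0.5)] * 3600
usage = energy_kwh(readings, 1.0)
```

Aggregating these totals per outlet is what lets users compare appliances and adjust their energy settings accordingly.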
Seattle Pacific University
According to the U.S. Department of Education, National Institute on Disability and Rehabilitation Research, roughly 155,000 Americans currently use electric powered wheelchairs. Many of these wheelchair users suffer from debilitating conditions that render their motor functions useless, leaving them completely immobile.
Most motorized wheelchair designs require some form of movement by the user to operate. This not only excludes those who lack the use of their arms, but also reduces the number of tasks that can be completed concurrently. Those who lack the use of their arms are left with very few options and are outfitted with devices that are intrusive and uncomfortable, such as systems controlled by the tongue or lips. We believe that the solution to this problem is the NIA Wheel.
The NIA (Neural Impulse Actuator) Wheel is a less intrusive, more capable system driven by brain signals sensed by a NIA. With the NIA Wheel and a little bit of training, users with severe mobility impairment will experience an easier, more comfortable wheelchair system.
As of 2011, more than 13% of Americans are at least 65 years old, and 29% of those living at home live alone. Independent living becomes more difficult with age, and once-trivial tasks, such as transporting items, come to require assistance. The current industry’s elderly-assistance robots come with expensive computer personalities and complex interfaces which are counterintuitive to the elderly user and fail to provide human-centric physical assistance. Our team proposes a humbler solution in Alfred: an intuitively controlled, mobile, self-balancing platform with voice-over-IP capability. Alfred provides lifting capability for up to 50 lbs and an auto-stabilizing tray for items requiring more finesse during transportation.
Alfred’s advantages stem from autonomous and intuitive motion controls that can be used to navigate and control the platform. Force transducer arrays on the platform’s circumference translate a touch input into omnidirectional motion or a change in platform height. Additionally, an infrared camera allows Alfred to detect and autonomously follow the user where space is constrained. A dynamic system balances and raises or lowers the platform, stabilizing the payload when Alfred encounters surface transitions, surmountable obstacles, or external forces. Alfred’s mission is intuitive assistance and elderly acceptance.
University of Pittsburgh
Dementia is the sixth-leading cause of death in the United States, and payments for dementia care were estimated at $200 billion in 2012. To reduce this cost and improve care quality, we propose to develop a wearable electronic unit and associated software platform for dementia care. The system, called PandaCare, consists of an electronic button and a wristband. The button is a miniature wearable computer in the form of a normal-looking chest button. It contains a wide variety of sensors, including cameras, a GPS receiver, an accelerometer, and a gyroscope, which enable indoor and outdoor localization, fall detection, and wireless real-time communication. To keep track of the patient’s general health, the wristband continuously monitors physiological signals such as body temperature, ECG, and respiration rate. The system operates fully automatically: the patient is required to do nothing more than wear the PandaCare unit. The device provides enough information for caregivers (family members or care facilities) to keep good track of patients, or even to understand their health, safety, and psychological needs and provide help when necessary, at much lower cost than current dementia care systems.
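Accelerometer-based fall detection is commonly built on a simple pattern: a dip in acceleration magnitude well below 1 g (free fall) followed by a sharp spike (impact). The sketch below shows that idea only; the thresholds and function names are assumptions, not the actual PandaCare algorithm.

```python
# Simplified, threshold-based fall detection from an accelerometer's
# magnitude readings (in units of g). The thresholds below are assumed
# illustrative values, not PandaCare's actual tuning.

FREE_FALL_G = 0.4   # magnitude well below 1 g suggests free fall
IMPACT_G = 2.5      # a subsequent spike suggests impact with the ground

def detect_fall(accel_magnitudes_g):
    """Flag a free-fall reading that is later followed by an impact spike."""
    for i, mag in enumerate(accel_magnitudes_g):
        if mag < FREE_FALL_G:
            if any(m > IMPACT_G for m in accel_magnitudes_g[i + 1:]):
                return True
    return False

# Normal walking stays near 1 g; a fall dips low, then spikes high
walking = [1.0, 1.1, 0.9, 1.05]
fall = [1.0, 0.3, 0.2, 3.1, 1.0]
```

A production system would add posture checks and a timeout window to cut false alarms, but the dip-then-spike signature is the core of the approach.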
Florida Institute of Technology
Many people are lost at sea each year. In most cases, the position of the ship is the last confirmable data that can be supplied to the search effort. This project offers an alternative to the current method of finding life rafts, one that will save helicopter fuel and the hours pilots spend searching for lost vessels. The project will create an autonomous vehicle that is taken to the last known location of the ship, or to the best place to search according to data from SAROPS, a simulation program that finds the location of highest probability for the raft. Once on site, a group of four small autonomous craft will search quadrants centered on the last known location. Upon reaching the signal, a vehicle will send its GPS data to the base ship for a more direct rescue. This allows the search vessel to make a single trip to the lost raft, as opposed to a constant search grid being flown by helicopters. For this project, a single ground vehicle will be created as an analog and will perform the tasks described above.
Personal Black Box
University of Massachusetts, Amherst
It is not uncommon for people to find themselves in dangerous or possibly life-threatening situations in out-of-the-way locations. Victims of crime and accidents need a trustworthy form of evidence to bring justice to those who have harmed them. Unfortunately, traffic and security camera networks can invade people’s privacy and often have blind spots. To address these situations, our team will provide the “Personal Black Box”, a portable, personal security device. This instrument will continuously record audio and video streams of the surrounding environment, without revealing its presence, and store recently recorded information on command. Ideally, this information will be thorough, tamper-resistant, and secure enough to be used as evidence in a court of law. Our solution differs from others on the market in that similar products are much too expensive for the common person to afford, and most lack the encryption capabilities our product is designed to have. We hope this device will bring a greater sense of security and confidence to potential victims and act as a deterrent to potential criminals, as it increases the chance that they will be held accountable for their crimes.
University of Pennsylvania
If you were to drive your vehicle today from Ithaca to Manhattan, Google or Garmin maps would give you the time and distance. For an electric vehicle, with a limited 80-100 mile driving range, we also need to know the trip’s energy requirement in kWh. This depends on changes in elevation, stop-and-go traffic congestion, the friction of the road, and so on. Our goal is to develop an experimental test bed for electric vehicle drive-cycle simulation and optimization of on-board energy management. This will eventually lead to a Google Maps EV edition (with kWh estimates for different vehicles and routes passing battery swap stations).
Using the Intel DE2i-150 platform, we will develop ProtoDrive, a desktop-sized electric vehicle platform capable of simulating different battery/supercapacitor scheduling schemes to maximize battery lifecycle and increase vehicle range. It consists of a physical model of an electric vehicle power train (motor, controller, battery, supercapacitor) coupled with an active dynamometer, making it possible to run the power train through its full speed and torque range. Our solution is unique in that we scale down the battery voltage levels in order to configure the system with different vehicle parameters for various types of cars and trucks. This makes it feasible to investigate the use of a hybrid battery/supercapacitor system in response to real commuter drive cycles and to develop scheduling algorithms that optimize the flow of energy between the battery, supercapacitor, and motor.
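The per-route kWh estimate mentioned above can be approximated by summing, over road segments, the standard rolling-resistance, aerodynamic-drag, and grade terms. Every coefficient in the sketch below is an assumed illustrative value, not a measured ProtoDrive parameter.

```python
# Back-of-the-envelope drive-cycle energy model of the kind a Maps-style
# EV range estimate could use. All vehicle coefficients are assumptions.

MASS = 1500.0      # vehicle mass, kg (assumed)
C_RR = 0.01        # rolling-resistance coefficient (assumed)
CDA = 0.6          # drag coefficient * frontal area, m^2 (assumed)
RHO = 1.2          # air density, kg/m^3
G = 9.81           # gravitational acceleration, m/s^2

def segment_energy_kwh(distance_m, speed_mps, elevation_gain_m):
    """Energy to traverse one road segment at roughly constant speed."""
    rolling = C_RR * MASS * G * distance_m
    aero = 0.5 * RHO * CDA * speed_mps ** 2 * distance_m
    grade = MASS * G * elevation_gain_m  # negative gain gives energy back
    return (rolling + aero + grade) / 3.6e6  # joules -> kWh

# 10 km at 25 m/s with a 50 m net climb
e = segment_energy_kwh(10_000, 25.0, 50.0)
```

Summing this over all segments of a route, with traffic-adjusted speeds, is what turns an elevation profile into the kWh figure a driver would need before committing to a trip.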
Southern Illinois University
More is known about outer space than about our planet’s oceans. Recent advances in nautical exploration via remotely operated vehicles have been useful in revealing the oceans’ depths; nonetheless, these vehicles are still costly to operate, requiring significant man-hours due to constant crew involvement and a support ship that can cost $17,500 a day to operate. An alternative low-cost option is needed for future oceanic studies; modularity and autonomy are key.
A proposed solution is the development of a self-sustaining autonomous underwater vehicle (AUV) capable of collecting large amounts of research data at a reduced cost by removing human interaction. Towards this end, the AUV must be able to both operate autonomously for a prolonged period of time and generate its own electricity. The minimization of power consumption through the efficient use of the Intel Atom and the capture of solar energy provide an effective answer to these problems. In addition, this platform offers adaptability which can meet the specific needs of diverse users. Exercising the AUV to its full potential will impact not only the methods of research and exploration humanity uses but will ultimately change the way we look at and interact with our planet.
University of Rochester
Using ultraviolet light to sterilize medical equipment is not new, nor is using small autonomous robots to clean the floor. What is new about this team’s submission is the combination of those two concepts with a swarm system. The swarm UV bot approach is more robust to failure, less likely to disturb hospital staff, and more scalable than either of the concepts it is based on. On each robot, a shielded, flashing UV light is mounted on the bottom, facing down. The robots work together to sanitize large surfaces more rapidly than other methods. The system draws on many modern technologies to achieve this goal: RF transmission connects each of the smaller robots to a larger Central Hub, and the swarm methodology and computational tasks are split proportionally between them. The Central Hub also charges the smaller robots, making it an effective base station for the combined system. This team hopes to provide this passive method of disinfection, as it is more durable, adaptable, and user-friendly than other market options.
Collaborative work environments have become the new standard in almost every field, from engineering to legal to management. Rarely does the responsibility for a project fall entirely on one person. However, it is rare to find any sort of computer or presentation system which allows multiple people to control it and work in the same environment simultaneously. Table-It seeks to answer this need.
Table-It’s central purpose is to create a portable conference workstation and collaboration hub. In addition to basic desktop sharing and projection capabilities, it will also offer teams shared file storage and version control of created files. It is based around a simple gesture interface, for operations such as interactive physical object digitization and menu navigation.
Uniquely, it will also meld audio and visual recordings of the meeting with the hard copy imaging, file creation, and version editing that took place at the associated times, creating a complete meeting record that can be opened and reviewed at a later date. In this manner, Table-It will allow teams to work quickly and effectively together, while also keeping a complete record of everything done and discussed.
University of Massachusetts, Lowell & Padmasri Dr. B.V. Raju Institute of Technology (BVRIT)
In countries all over the world, human lives are being needlessly cut short by various contagious diseases. These diseases are caused by harmful bacteria and viruses present in the air we breathe. In some places, such as a hospital surgical environment, people are especially susceptible to infection. Our goal is to design an automated device capable of quantifying the infectious agents in these areas, providing much-needed information to the proper personnel.
By leveraging current developments in biosensor technology, we can design virus and bacteria sensors that quickly determine the quantity of these infectious agents in a sample volume. Creating an autonomous sensor package will allow hospitals and other susceptible environments all over the world to monitor and understand the risk of infection. Mounting the device on a robotic platform adds functionality while reducing human presence in an area. Utilizing the power-scaling functionality of the Intel Atom processor, the whole system will be controlled and run off the robot platform’s existing battery.
University of Houston
In 2011, 81 American firefighters laid down their lives in efforts to preserve life and extinguish burning structures or forests. These deaths can be prevented with more information about the exact nature of the fire. What is fueling the fire? What is the structural integrity of the supports? How hot is the fire? Are there people still inside the building? Armed with this type of information, a firefighter can make more informed decisions about how to attack the fire, where to enter, where to find victims, and other critical choices that mean life or death for the people in the blaze. The University of Houston’s Team Ignitus has taken on this challenge. Our solution is a robotic device that helps preserve life in these dangerous situations by collecting and transmitting data to the team. The device will be operated by a firefighter via on-site remote control: the operator drives it into the burning structure while an onboard computer collects imagery and data about the fire. With this information, the operator can guide the team through the safest possible entrance and exit. The more effectively and efficiently our men and women can negotiate the burning structure, the more lives they can save.
For the elderly and disabled, dropping an object can be a serious issue. How would they pick things up if they were all alone? Our project aims to solve this issue by having a semi-autonomous robot complete this action for them. This robot, Boost, uses a video camera to display objects to the user on the user’s panel screen. After the user manually selects the object using the screen display, the robot will automatically retrieve the object using a grabbing tool and return the dropped object to the user through the use of an elevating platform.
Boost requires a video camera, wireless capabilities, a grabbing mechanism to pick up objects, as well as additional sensors to identify the location of the object and determine whether object retrieval was successful. Our solution is unique in that there is no similar robotic device in the market that can grab a variety of small objects, from keys to glasses, and return them to the user. The exciting aspect of this project is that it is a very practical solution for the elderly/disabled who are left alone for a period of time. Project Boost aims to improve their overall quality of life.
In our daily lives, we generally rely on memory, reminders on mobile devices, and physical checklists to prepare for our daily tasks. These preparations include knowing the time and location of each task, as well as what materials or items each engagement will require. Despite reminders and checklists, we still often find ourselves showing up without required materials or leaving important items behind as we move between tasks. A notable shortcoming of the conventional reminder, then, is its failure to monitor and cross-check whether a user has packed all essential items.
For a user to avoid the frustration of losing/forgetting important items, Sigma’s project aims to create a smart backpack that accesses a user’s daily schedule, deduces the items required for scheduled tasks, and notifies the user whenever these required items are outside a certain range of the backpack. Sigma’s solution seeks to alert the user if any required item is missing by integrating the computational power and storage ability of Intel’s processor with an RFID reader and tags, an accelerometer, an LCD screen, and a mobile application.
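The core cross-check is a set comparison between the items a scheduled task requires and the RFID tags the reader currently sees inside the backpack. The tag names and schedule entries below are invented for illustration and do not come from Sigma’s design.

```python
# Hypothetical sketch of the smart backpack's core check: compare the
# RFID tags read inside the bag against the items a scheduled task
# requires. Tag IDs and schedule entries are invented for illustration.

def missing_items(required_tags, seen_tags):
    """Return the required items whose tags the reader has not detected."""
    return sorted(set(required_tags) - set(seen_tags))

schedule = {"calculus_lecture": {"laptop", "charger", "notebook"}}
in_bag = {"laptop", "notebook", "water_bottle"}

# A non-empty result would trigger the LCD and phone notifications
alerts = missing_items(schedule["calculus_lecture"], in_bag)
```

Running this check whenever the accelerometer reports the bag being picked up is one plausible way to alert the user exactly when a forgotten item still matters.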
Arizona State University
IBM’s Rodney Atkins recently wrote of America’s pressing need for more science, technology, engineering, and math (STEM) students, and declared, “We need to increase the size of the STEM education pipeline by maintaining an enthusiasm for science, technology, and engineering and math throughout high school and college.”  Ultimately, the best way to engender such enthusiasm is to provide students with opportunities to interact with technology and show them that even very complex technology is within their grasp to understand and control. This team proposes the creation of an advanced robot whose functionality can be broken down into simple services and manipulated by an understandable UI (such as Microsoft VPL). By creating this and bringing it to students to work with, the team can inspire them and help them realize their own potential. The aim is to make the robot’s services both simple and powerful, targeted towards pathfinding and patrolling functionality, which is both highly visible in its functionality as well as generously flexible in its potential. This will allow students programming the robot to bring significant creativity to bear, which this team believes will best inspire students to pursue STEM. Thus, this will be a unique solution which creates a complicated patrol robot that can be programmed easily by high school students.
University of Pennsylvania
Our team is designing an untethered, powered, upper-body exoskeleton for use in rehabilitation and therapeutic applications, as well as in occupations requiring augmented strength. Though such systems exist, past exoskeleton efforts have produced bulky, expensive, invasive, and tethered designs. The challenge is to build an exoskeletal system that is inexpensive, streamlined, and wireless.
Our solution is unique in that it will be a low-cost, ergonomic device actuated through sensors measuring the user’s motion and muscle activity. Through on-board sensing, the exoskeleton can provide rich data, such as range of motion or strength, for use in physical therapy. Doctors and patients can use this data to track improvement over time more accurately. With its low cost, hospitals could deploy multiple devices and aid a larger population of patients; the devices could even be used for physical therapy at home, which would dramatically improve patients’ quality of life.
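One way sensor-driven actuation of this kind is often structured is to map a smoothed muscle-activity (EMG) reading to an assist torque. The sketch below is purely illustrative: the threshold, gain, torque cap, and moving-average smoothing are invented for the example and are not taken from the team's design.

```python
# Hypothetical EMG-to-torque mapping for a powered exoskeleton joint.

def assist_torque(emg_samples, threshold=0.2, gain=5.0, max_torque=10.0):
    """Map recent normalized EMG samples (0-1) to a motor torque in N*m.

    Below the activity threshold no assistance is applied; above it,
    torque grows linearly with activity, capped at max_torque.
    """
    level = sum(emg_samples) / len(emg_samples)  # simple smoothing
    if level < threshold:
        return 0.0  # user is at rest: no assistance
    return min(gain * (level - threshold), max_torque)

# Resting muscle -> no torque; moderate activity -> proportional assist.
print(assist_torque([0.1, 0.1, 0.1]))
print(assist_torque([0.6, 0.6, 0.6]))
```

A real controller would also fuse motion sensing and apply safety limits on torque rate, but the threshold-plus-proportional shape captures the basic idea of actuating from muscle activity.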
Outside of physical therapy, augmented strength is applicable to physically intensive occupations as well as search-and-rescue operations. Each year, thousands of workers must take leave because of injuries caused by heavy lifting; with augmented strength, workers could avoid such harmful situations.
University of California, San Diego
The UAV Tracker will be a multi-modal automatic tracking system designed to intelligently track a moving target, in this case an autonomous aerial system. While small rotary-wing UAVs can be tested in labs with expensive vision-capture cameras, larger fixed-wing systems cannot fly in such a confined space. This makes testing more complicated: the UAV operator cannot easily see the actual system state, and there is no complete record of flight with which to correlate telemetry against actual flight characteristics. In addition, longer range requires a directional antenna, which must point at the plane to maintain a link. Plane and ground-station GPS, signal-strength differences, and computer-vision tracking will be used in conjunction for reliable tracking. Our project builds on the common antenna tracker with the addition of computer vision, simultaneously improving the product’s robustness and extending its capabilities to aerial targets beyond those with GPS devices. While the intended use for the UAV Tracker is an autonomous vehicle, it could also provide a live plane view and flight record for any aircraft, find use in ornithology, or be extended to cinematography.
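The GPS-based tracking mode reduces to a geometry problem: given the plane's and ground station's positions, compute the azimuth and elevation at which to point the antenna. The sketch below uses a flat-earth approximation valid for short ranges; the function names and coordinate handling are assumptions for illustration, not the team's implementation.

```python
import math

# Hypothetical GPS-pointing calculation for a directional antenna tracker.

def antenna_angles(ground, plane):
    """Return (azimuth_deg, elevation_deg) from ground station to plane.

    ground, plane: (lat_deg, lon_deg, alt_m) tuples.
    Uses a local flat-earth approximation (fine over a few kilometers).
    """
    lat0, lon0, alt0 = ground
    lat1, lon1, alt1 = plane
    # Convert degree offsets to meters: ~111,320 m per degree of latitude,
    # scaled by cos(latitude) for longitude.
    north = (lat1 - lat0) * 111_320.0
    east = (lon1 - lon0) * 111_320.0 * math.cos(math.radians(lat0))
    up = alt1 - alt0
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    elevation = math.degrees(math.atan2(up, math.hypot(north, east)))
    return azimuth, elevation

# Plane ~1 km due north of and 100 m above the station.
az, el = antenna_angles((32.88, -117.23, 0.0), (32.889, -117.23, 100.0))
```

In the full system this estimate would be fused with the signal-strength and computer-vision modes when GPS is stale or unavailable.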
University of Rochester
It is not without reason that we often refer to the computer screen as a window looking out into a sea of knowledge. The reason is twofold. First, computer systems “look out into a sea of knowledge” because they let us access and parse the text, images, videos, and music posted throughout time and spread worldwide. Second, computer screens are “windows” because most of the information we receive (text, images, videos) is accessed visually. To the visually impaired, this is a significant limitation of screen-based systems. The goal and challenge of our project is to meet this need.
The project aims to create a refreshable braille display that acts as a screen for the blind. The display will read in text and PDF files and output the result through a tactile display. By mimicking traditional hole-punched paper braille books, the braille ebook will give those who cannot otherwise access these files a whole new library of information. If successfully implemented, this project will be the first step toward opening up that library.
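The translation step at the heart of such a display can be sketched as a lookup from characters to the raised dots (numbered 1 through 6) of a standard braille cell. The partial table below covers only a few letters for illustration; a real device would use a complete (likely contracted) braille table and drive the actuator pins rather than return data.

```python
# Minimal sketch of text-to-braille translation. BRAILLE_DOTS maps a
# character to the set of raised dot numbers in its 6-dot cell; the
# table is deliberately tiny and the function names are illustrative.

BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
    " ": set(),  # blank cell between words
}

def to_cells(text):
    """Translate text into a list of dot sets, one braille cell per char.

    Characters missing from the (partial) table are skipped here; a real
    translator would handle the full alphabet, numbers, and punctuation.
    """
    return [BRAILLE_DOTS[ch] for ch in text.lower() if ch in BRAILLE_DOTS]

# "bad" -> dots {1,2}, {1}, {1,4,5}
print(to_cells("bad"))
```

Refreshing the display then amounts to raising exactly the pins named in each cell's dot set as the reader pages through the file.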
VIOS: Vision Interactive Operating System
University of Pennsylvania
The goal of this project is to transform television viewing into an immersive, interactive experience tailored to the viewer’s interests in the least invasive manner. Using the Intel DE2i-150 platform, we will develop the TV set-top box of the future, capable of (a) in-video product embedding, tagging, and linking for less invasive continuous advertisement and more effective user interaction, so that a user can click on objects in a TV show to get more information or be directed to a product purchase site; and (b) integrating physical elements such as lighting, haptic feedback, and synthesized sound effects, so that the entire room is activated when an appropriate context-aware video signal is received (e.g., when the Yankees hit a home run, the sports bar lights up and rumbles with sound effects).
Our solution will focus on the authoring and runtime system for such embedded digital content; the design and development of the overall set-top-box system architecture; and demonstration of the integrated system with popular TV shows. For the interactive-advertisement application, the user will be able to purchase products online directly from the TV show. Our efforts will center on digital video authoring tools for object definition, tracking, embedding, tagging, and linking to related product content. This interaction will be supported by on-screen graphics and an interactive remote control. The immersive-ambience application will deliver a rich viewing experience through autonomous features such as gesture recognition, appliance control, and mood lighting, all integrated in one box. In this way we can merge the virtual and the real world while demonstrating the power of the platform.
This new project is being carried out in close collaboration with Comcast Cable’s Office of the CTO.