Some of these projects are difficult to describe in a one-way presentation, so you may benefit from talking to me about them. I have organized them into categories so that you can browse them more easily; that said, I highly recommend that you look through everything. In any event, some of my preliminary project ideas are outlined below.

The following are product-based projects where I would expect you to conduct a product development cycle much like a startup company would. That is, you would need to develop a basic market-need statement and interview prospective customers to learn their interest in, and willingness to spend money on, the product that you propose. It is expected that these customer interviews would cause you to refine (and possibly completely redesign) the envisioned product.

- Build a baby/child car seat alarm to notify a parent that a child has been left in the car. The device should fit seamlessly into the individual's normal routine and must operate independently of any external network connectivity. One possible approach is a sensor inline with the seat belt buckle that synchronizes with a Bluetooth alarm device carried with the vehicle key.

Each of these projects ties into one of my main research areas. You would be joining a team of students studying these and related problems. It is most likely that you will also prepare and submit manuscripts for publication from the work you perform on these projects.

- A UC faculty member from the Environmental Health program at UC Medicine is interested in industrial training using augmented reality. This project would develop prototype augmented reality experiences with a Microsoft HoloLens.

- Topological Data Analysis (TDA) is a method of data analysis that derives characteristics of data from its n-dimensional shape. Basically, it applies the mathematical field of topology to data analysis. Unfortunately, TDA (or more precisely, the computation of persistent homology) has exponential complexity in both time and space. Hence, the application of TDA is limited to around 10K points in ℝ^3. Several of my students and I have been developing theories and algorithms to partition much larger, high-dimensional data sets so that they can be analyzed using TDA techniques. Toward this end, we have developed a Light-weight Homology Framework containing the software to support our studies. Numerous parts of these algorithms are reasonably well suited to computation on a GPGPU. Hence, this senior project is to develop CUDA-based codes of our algorithms for execution on an Nvidia GPU.

- I have a project that uses randomized and approximate methods for high-performance clustering of high-dimensional data. This project, called RPHash, combines random projection hashing, locality sensitive hashing, and a count-min sketch for clustering high-dimensional data (don't worry if these terms mean nothing to you; you can quickly get up to speed on them). In this year's senior project I would like to focus on the Locality Sensitive Hashing (LSH) component of the software. The family of LSH algorithms hashes an n-dimensional key to an m-dimensional (m < n) hash that (mostly) preserves the Euclidean distance (to the origin). Basically, the LSH algorithms map regions in the high-dimensional space into similar (but smaller) regions in the lower-dimensional space. The problem with these algorithms is that if we are attempting to cluster data that is densely located within one or two of the LSH regions, it will not be adequately partitioned. Fortunately, there is a class of LSH algorithms called Adaptive LSH. Unfortunately, their operation requires two passes over the data. In this senior project, I would like to explore some ideas I have for an online, one-pass Adaptive LSH algorithm.
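To make the LSH idea above concrete, here is a minimal sketch of one classic LSH family (sign-of-random-projection hashing). This is an illustration only, not the actual RPHash code; all names and parameter choices are mine:

```python
import random

def make_rp_hash(n_dims, m_bits, seed=0):
    """Build a simple random-projection LSH function.

    Illustrative sketch only -- not the actual RPHash implementation.
    Each of the m_bits hash bits records which side of a random
    hyperplane the input point falls on.
    """
    rng = random.Random(seed)
    # One random hyperplane normal per hash bit, in the n-dim input space.
    planes = [[rng.gauss(0.0, 1.0) for _ in range(n_dims)]
              for _ in range(m_bits)]

    def lsh(x):
        # Nearby points land on the same side of most hyperplanes, so
        # they usually share most (often all) of their hash bits.
        h = 0
        for plane in planes:
            dot = sum(p * xi for p, xi in zip(plane, x))
            h = (h << 1) | (1 if dot >= 0 else 0)
        return h

    return lsh

if __name__ == "__main__":
    lsh = make_rp_hash(n_dims=64, m_bits=16)
    rng = random.Random(1)
    a = [rng.gauss(0.0, 1.0) for _ in range(64)]
    b = [ai + 0.01 * rng.gauss(0.0, 1.0) for ai in a]  # near-duplicate of a
    c = [rng.gauss(0.0, 1.0) for _ in range(64)]       # unrelated point
    # Hamming distance between hashes: small for (a, b), larger for (a, c).
    print(bin(lsh(a) ^ lsh(b)).count("1"),
          bin(lsh(a) ^ lsh(c)).count("1"))
```

Note how the "dense region" problem shows up here: the hyperplanes are fixed before any data is seen, so a tight cluster straddling few hyperplanes collapses into one or two buckets. That is exactly what an adaptive (data-dependent) LSH scheme tries to avoid.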

- One of my main research areas is parallel and distributed simulation using an optimistic synchronization strategy called the Time Warp Mechanism. As part of these studies, my students and I have developed a general-purpose discrete-event simulation kernel and modeling API, called WARPED2, that is configurable for sequential or parallel execution and includes many configuration variables to turn various Time Warp optimizations/sub-algorithms on or off. The WARPED2 parallel simulation engine has been designed to support multi-threaded and task-level parallelism for execution on multi-core and many-core Beowulf clusters. This project would build new and, ideally, scalable discrete-event simulation models to help exercise the simulation kernel. While I am willing to entertain ideas on what types of models you would develop, one possibility would be ocean currents and the accumulation of debris in various regions based on the current flow and possibly weather patterns. In some cases these projects may require outreach to other researchers around the world who are experts in the fields we are attempting to model.
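To give a feel for what a discrete-event simulation model looks like, here is a toy sequential event-queue kernel with a (hypothetical) two-region debris-drift model. This is not the WARPED2 API; it only illustrates the timestamp-ordered event processing that a Time Warp kernel executes optimistically in parallel:

```python
import heapq

class Simulator:
    """Toy sequential discrete-event simulation kernel (illustration
    only; WARPED2's actual C++ API differs). Events are processed in
    timestamp order -- the invariant that Time Warp relaxes
    optimistically and repairs with rollback when violated."""

    def __init__(self):
        self.now = 0.0
        self._seq = 0      # tie-breaker so equal timestamps stay FIFO
        self._queue = []

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self._queue,
                       (self.now + delay, self._seq, handler, payload))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, payload = heapq.heappop(self._queue)
            handler(self, payload)

# Hypothetical model: debris drifting between two ocean regions.
counts = {"gyre": 0, "coast": 0}

def drift(sim, region):
    counts[region] += 1
    # Each arrival schedules the next drift event into the other region.
    other = "coast" if region == "gyre" else "gyre"
    sim.schedule(delay=2.0, handler=drift, payload=other)

sim = Simulator()
sim.schedule(0.0, drift, "gyre")
sim.run(until=10.0)
print(sim.now, counts)  # -> 10.0 {'gyre': 3, 'coast': 3}
```

In a real model each region would be a simulation object exchanging timestamped event messages with the others, which is what allows the kernel to distribute the objects across cores or cluster nodes.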
- As a complementary project to that given above, I have developed a methodology and collection of tools to capture profile data about a discrete-event simulation model. This project is called DESMetrics. We have captured simulation data from over 22 different simulation models from 6 different simulation engines. Most recently we received some very large trace data sets (80GB and 440GB) from Germany that we cannot possibly analyze with the current in-memory techniques. An attempt to build/run an out-of-core solution proved to be infeasible due to excessive run times. Furthermore, with these very large trace files it is reasonable to expect that the run-time characteristics of the simulation models might change over time. Hence I have initiated sampling techniques that extract sub-traces from the original trace for analysis. These tools need to be extended and parallelized for higher performance. This senior project would expand this project and the associated software tools. We will also attempt to analyze the simulation models themselves to see if we can explore more exotic optimizations/configurations that incorporate causality relaxation techniques similar to "lazy reevaluation" (yes, I will have to explain this idea in person).
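The sub-trace sampling idea above can be sketched as a single streaming pass over the trace that collects fixed-length windows at chosen offsets, so nothing but the extracted windows ever sits in memory. This is an illustration under my own assumptions (evenly spaced window starts, events as an opaque stream), not how the DESMetrics tools themselves are structured:

```python
def extract_subtraces(events, starts, window):
    """One-pass extraction of fixed-length sub-traces from an event
    stream. `events` may be a generator over an arbitrarily large
    trace file; only the extracted windows are held in memory.
    `starts` are ascending event indices where each sub-trace begins
    (a hypothetical choice -- e.g. evenly spaced, to detect drift in
    model behavior over the course of the run)."""
    subtraces = {s: [] for s in starts}
    open_windows = []        # [start, remaining] windows currently filling
    pending = sorted(starts)
    for i, ev in enumerate(events):
        while pending and pending[0] == i:
            open_windows.append([pending.pop(0), window])
        still_open = []
        for w in open_windows:
            subtraces[w[0]].append(ev)
            w[1] -= 1
            if w[1] > 0:
                still_open.append(w)
        open_windows = still_open
        if not pending and not open_windows:
            break            # all windows collected; stop streaming early
    return subtraces

# Example: a synthetic 1M-event trace, three 4-event sub-traces.
trace = iter(range(1_000_000))
subs = extract_subtraces(trace, starts=[0, 500_000, 999_990], window=4)
print(subs[0])        # -> [0, 1, 2, 3]
print(subs[500_000])  # -> [500000, 500001, 500002, 500003]
```

Because each window fills independently, the same structure parallelizes naturally: separate workers can scan disjoint byte ranges of the trace file for their own window starts.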