Welcome to the Hare Research Group
We carry out research in a wide range of topics, including programming language design, implementation, analysis, transformation, and parallelisation; the specification, design, analysis, and verification of both software and hardware systems; approximate string matching; complexity theory; external-memory data structures; data-streaming algorithms; randomised algorithms; and real-time and online algorithms.
Hare Research Group Administrator: Clare Williams
I am currently pursuing a PhD in Computer Science at the University of Bristol and working on “Fault Tolerant and Energy Efficient Wireless Sensor Networks” under the supervision of Prof. Dhiraj Pradhan.
Prior to the PhD, I completed an MSc in Computer Science at the University of the West of England, Bristol. In my MSc thesis I studied the performance of four routing protocols designed for ad hoc networks by simulating them in the NS-2 simulator.
Fault-tolerant and low-power system designs (PhD thesis related)
Using fault-tolerant techniques, we achieve high reliability and soft-error tolerance in circuits such as memories and combinational logic. The additional circuitry should consume less power than classical fault-tolerance techniques, improving overall power consumption.
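One classical soft-error-tolerance technique of the kind referred to above is triple modular redundancy (TMR). As an illustrative sketch (not the group's specific low-power design), the behaviour of a bitwise majority voter over three redundant copies of a logic block can be modelled as:

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise majority vote of three redundant result words:
    each output bit takes the value held by at least two inputs."""
    return (a & b) | (a & c) | (b & c)

def tmr(logic, x):
    """Evaluate the same combinational logic three times and vote.
    A transient fault corrupting any single copy is masked, since
    the other two copies still agree."""
    return majority(logic(x), logic(x), logic(x))
```

The cost of TMR is roughly tripled area and power, which is exactly the overhead that lower-power fault-tolerance techniques aim to reduce.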
Yield improvement techniques (PhD thesis related)
Due to technology shrinkage, permanent faults are becoming more common. We are working towards design techniques that improve the yield and decrease the manufacturing cost of different kinds of chips, e.g. memories.
Software fault tolerance
Hardware structures have been developed that can 'tolerate' faults, either transient or permanent. However, hardware failures are only one source of unreliability; the other is software faults, which have become increasingly prevalent with the steadily increasing size and complexity of software systems. We are working in this direction, improving system reliability by increasing software reliability.
The primary technological motivation for this research is the current power crisis in embedded systems. Limitations on battery life and the ever-increasing computational demands of mobile computing result in a dramatic power mismatch in today's technology. Our group has wide expertise in all relevant fields. We are also investigating various verification and synthesis issues in VLSI circuit and system architecture and design. These diverse skills will be applied to investigating and defining the architecture of a new adaptive system-on-chip platform capable of implementing any electronic system. We seek to exploit our recent findings in low power so that they enhance the performance of the system compared with more 'conventional' techniques used today. To this end, our research will pave the way for the application of new techniques and algorithms, making adaptive systems a viable and more attractive design alternative.
Fault-Tolerance Issues in Sensor Networks
Recently we have been investigating the design of fault-tolerant wireless sensor networks. Wireless sensor networks are often unattended, autonomous systems with severe energy constraints and low-end individual nodes of limited reliability. We are developing a framework to detect and diagnose faults, and we are also studying new reconfiguration and recovery algorithms.
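One common ingredient of fault detection in sensor networks is neighbour comparison: a node whose reading deviates sharply from those of nearby nodes is likely faulty. The sketch below is purely illustrative (it is not the group's framework); the function name, the median-based rule, and the threshold are all assumptions chosen for the example.

```python
import statistics

def diagnose(readings: dict, neighbours: dict, threshold: float = 1.0) -> set:
    """Flag a node as likely faulty if its reading differs from the
    median of its neighbours' readings by more than `threshold`.
    `readings` maps node id -> sensed value; `neighbours` maps
    node id -> list of neighbouring node ids."""
    faulty = set()
    for node, value in readings.items():
        nbr_values = [readings[n] for n in neighbours[node]]
        if nbr_values and abs(value - statistics.median(nbr_values)) > threshold:
            faulty.add(node)
    return faulty
```

In an energy-constrained network, such a check would typically run locally with only single-hop neighbour data, to avoid the cost of routing all readings to a central node.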
Decision Diagrams for Verification and Synthesis
Having matured over the years, formal design verification methods, such as model checking and equivalence checking, have found increasing application in industry. We are formulating canonical representations based on Galois fields. These representations are not only more efficient for verification but also admit decomposition techniques that lead to low-power as well as area-efficient designs.
The cost of manufacturing testing is becoming a major challenge with shrinking device dimensions and process variability. The challenges also include increasing complexity with limited access to internal points. Our research focuses on novel uses of Built-In Self-Test (BIST) methods and Design for Testability (DFT) techniques.
I am Jawar Singh, currently working towards my PhD (3rd year) in the Department of Computer Science and Engineering, University of Bristol, UK. I have a sound academic track record and a background in Electronics and Electrical Engineering. I completed my graduation (B.Tech., 4 yrs) and post-graduation (M.Tech., 2 yrs) at the Indian Institute of Technology (IIT) Roorkee, India, in 1999 and 2001 respectively.
Memory is a central concept for machines and mankind alike; it is no exaggeration to say that both can be characterised by how well they are equipped with it. That is the theme of my research: static random access memory (SRAM). The ultimate aim of my work is to design and develop a memory cell (and system) that meets the increasing demands of this fascinating world, such as growing capacity, higher throughput, process-variation tolerance, and fault tolerance. The growing demand for capacity can be fulfilled by packing a larger number of devices, with shrinking feature sizes, into a single die. However, this aggressive scaling poses a number of challenges to embedded memory at the current 65nm and 45nm technology nodes. For instance, smaller feature sizes imply a greater impact of process variations, higher susceptibility to faults, increased area overhead, and higher power consumption.
Currently I am focusing on low-power, process-variation-tolerant memory cell designs. I have proposed a couple of SRAM cell designs, oriented towards single-ended read/write and word-oriented organisation (cells are not interleaved). The proposed 6T and 7T SRAM cell designs have been rigorously analyzed and validated through simulation. The simulation results reveal that the proposed designs offer better immunity to process variations, sufficient noise margin, and low area overhead, with some trade-off between power and performance. I am also working on parallel and pipelined architectures for embedded memories in multimedia and communication applications where high throughput is needed.
Ilango Sriram is a PhD student at the University of Bristol who has a Master’s degree in Computer Science from the Technical University of Munich (Germany). He spent a year as a research associate at the Hewlett-Packard Research Labs in Bristol, UK, where he worked on automated mapping of business processes to application and infrastructure configuration, and explored ways of monitoring and managing virtualised infrastructures in data centres. He started his PhD, fully sponsored by HP Labs, in Oct. 2007 and is looking into ways of understanding and modelling increasing complexity and dynamics in future generations of data centres.
Mohamed A. Salem is a PhD student under the supervision of Dr. Kerstin I. Eder. Mohamed's research addresses the critical problem of design verification closure in complex electronic integrated circuits. His PhD is entitled "Functional Verification Coverage Closure". Mohamed is a part-time student, as he is currently the Engineering Manager of the Design Verification IP product line in the Design Verification and Test division of Mentor Graphics Corporation. Mohamed is industrially supervised by Harry Foster, Mentor's Chief Verification Scientist.
Using the Event-B Formal Method to develop robust instruction set architectures.
I am Fangfang Yuan, a second-year PhD student supervised by Kerstin Eder. My research area is design verification; I am currently investigating how design decisions affect verification effort. Functional verification of modern microprocessors is an important and complex task. With the increasing demand for fast computation, many design decisions have been made on general-purpose processors to make them solve specific problems, such as image processing and cryptography. Some decisions make the processors more efficient; however, they can also increase verification effort. Several metrics exist to measure the completeness of the verification process, but there is no method to objectively demonstrate how design decisions affect verification effort.
The design under verification is a cryptographic processor whose ISA has four sources and two destinations and uses look-up tables to perform logic operations. During my first year of PhD study, I ran simulation-based experiments on both the innovative processor and a traditional processor, measuring code coverage and branch coverage under tests with the same functionality. The results showed that the traditional processor achieved higher coverage under the same functional tests. However, high coverage does not always mean good verification quality; verification also includes bug detection. I am now looking at the formal side, modelling the processor from the specification down to the bit pattern of each instruction through refinement.
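The general idea of a LUT-based logic instruction with four sources can be sketched as follows. This is an illustrative model, not the actual processor's semantics: the 16-bit truth-table encoding, the 32-bit word width, and the function name are all assumptions made for the example.

```python
def lut_op(lut: int, s0: int, s1: int, s2: int, s3: int) -> int:
    """Evaluate a 4-input look-up table bitwise over four 32-bit
    source words. `lut` is a 16-bit truth table: bit i of `lut`
    gives the output when the four source bits, read as the index
    (s3 s2 s1 s0), equal i. One instruction can thus compute any
    Boolean function of four operands."""
    result = 0
    for bit in range(32):
        idx = ((((s3 >> bit) & 1) << 3) | (((s2 >> bit) & 1) << 2) |
               (((s1 >> bit) & 1) << 1) | ((s0 >> bit) & 1))
        result |= ((lut >> idx) & 1) << bit
    return result
```

Such an instruction illustrates the verification challenge mentioned above: the operation performed depends on a 16-bit immediate, so the space of behaviours to cover is far larger than for a fixed AND/OR/XOR instruction set.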