This page provides current and archived information about the weekly research seminars held in the Department of Computer Science. The talks are intended to encourage interaction and exchange between the various research groups within the department and beyond.
Departmental Seminars take place in Room 1.06 MVB on Thursdays at 4pm and are an hour long, with presentations of 30-45 minutes and the remainder reserved for questions. These sessions are relatively informal.
With around 30 talks a year covering a wide spectrum of current and upcoming topics in computer science, the series is an excellent opportunity to meet other researchers in the department, to keep up with advances in various fields, and to share your own research. Sessions typically attract 20-50 people. The seminars primarily target staff and postgraduates; however, any interested student is welcome to join the talks and discussions.
Computer Vision Group
University of Leeds
Shared Parts for Deformable Part-based Models
The deformable part-based model (DPM) proposed by Felzenszwalb et al. has demonstrated state-of-the-art results in object localization. The model offers a high degree of learnt invariance by utilizing viewpoint-dependent mixture components and movable parts in each mixture component. One might hope to increase the accuracy of the DPM by increasing the number of mixture components and parts to give a more faithful model, but limited training data prevents this from being effective. We propose an extension to the DPM which allows for sharing of object part models among multiple mixture components as well as object classes. This results in more compact models and allows training examples to be shared by multiple components, ameliorating the effect of a limited size training set. We (i) reformulate the DPM to incorporate part sharing, and (ii) propose a novel energy function allowing for coupled training of mixture components and object classes. We report state-of-the-art results on the PASCAL VOC dataset.
The proposed approach belongs to a larger class of object detection models that commonly model the appearance of an object class by (i) employing latent variables, such as part positions and (ii) using a discriminative approach. By casting the learning problem as an energy minimization that assumes a given set of latent variables for the negative training instances, such models generally benefit from relatively strong convergence guarantees. In the talk we will introduce the energy function, the optimization scheme and the convergence guarantees of the proposed model.
Dr. Alexandros Stamatakis
Heidelberg Institute for Theoretical Studies
Disentangling Evolution on Supercomputers
New wet-lab DNA sequencing technologies have generated an unprecedented molecular data flood. Therefore, the application of High-Performance Computing (HPC) techniques for analyzing molecular data and the development of improved algorithms is becoming increasingly important in Bioinformatics.
In this talk, I will illustrate these challenges using the example of phylogenetic inference under the Maximum Likelihood (ML) criterion, which is an NP-hard problem. The main challenge lies in the optimization of the phylogenetic likelihood function.
Initially, I will address work on search algorithms, data structures, and convergence criteria for accelerating tree searches.
Another part of my work focuses on the application of HPC techniques to phylogenetic inference. I will review how the phylogenetic likelihood kernel can be mapped to a broad variety of parallel hardware architectures (e.g., GPUs, FPGAs, the IBM Cell, multi-core systems, compute clusters).
Finally, I will provide an overview of current and future collaborative large-scale analyses with biologists that aim to elucidate various aspects of evolution, with the ultimate goal of reconstructing a comprehensive plant tree of life.
Visual Event Classification with Location-based Priors
ABSTRACT: Visual event and activity classification has been mostly studied for cases when the camera is static and/or where the action is well centered and localized in the image. Less work has been concerned with the case of a moving camera, which is the situation in systems that are observing inside-out e.g. a wearable system. We present a method for visual classification of actions and events captured from an egocentric point of view. The method tackles the challenge of a moving camera by creating deformable graph models for classification of actions. Action models are learned from low resolution, roughly stabilized difference images acquired using a single monocular camera. In parallel, raw images from the camera are used to estimate the user’s location using a visual Simultaneous Localization and Mapping (SLAM) system. Action-location priors, learned using a labeled set of locations, further aid action classification and bring events into context. We present results on a dataset collected within a cluttered environment, consisting of routine manipulations performed on objects without tags.
Speed Scaling to Manage Temperature
ABSTRACT: In this talk we consider the online speed scaling problem, where the quality of service objective is deadline feasibility and the power objective is to minimise the maximum temperature reached in the schedule. In the special case of batched jobs, where all jobs are released at time 0, we show a simple algorithm to compute the optimal schedule, first when the optimal maximum temperature is known, and then when it is unknown. For general online instances with arbitrary release dates, we give a new online algorithm and show the competitive ratio is an order of magnitude better than the previous best known algorithm. The talk will be aimed at a general audience, and no knowledge of speed scaling or online algorithms will be assumed.
Dr. Mark Witkowski
An Approach to Robot Safety for Consumer Robotics
ABSTRACT: One of the major stumbling blocks to the widespread application of robotic devices as a consumer product is the need to convince users of actual, and perceived, safe operation. While much can be done with conventional physical safety measures coupled to software verification techniques, this talk will argue that adaptive and learning robots operating in unpredictable environments will necessitate an extended approach to safety issues. We will consider the idea of "rational reconsideration", based on anticipating possible harmful outcomes of goals, plans and actions a robot could perform and their physical and social consequences. The talk will look at a number of ethical, legal and moral issues relating to safe operation in robots and present an implementation proposal based on the speaker's Behavioural Calculus.
Graphics Processor Architectures: tile-based deferred rendering with POWERVR
ABSTRACT: POWERVR graphics technology is based on a concept called Tile Based Deferred Rendering (TBDR). In contrast to Immediate Mode Rendering (IMR), used by most graphics engines in the PC and games console worlds, TBDR focuses on minimising the processing required to render an image as early in the processing of a scene as possible, so that only the pixels that will actually be seen by the end user consume processing resources. This approach minimises memory bandwidth and power consumption while improving processing throughput, but it is more complex. Imagination Technologies has refined this challenging technology to the point where it dominates the mobile markets for 3D graphics rendering, backed up by an extensive patent portfolio. This talk will give an insight into the POWERVR graphics technology.
Dr. Julian Gough
University of Bristol
Opportunities for computer scientists and electronics engineers in the coming age of personal medicine and genomics
ABSTRACT: How will personal genome sequencing change the world ... maybe not how you expect! We have great expectations about what genome sequencing of individual humans will bring, but the expectations of past generations have often not been accurate; people dreamed of video-phones but what we actually wanted was text messages. What will the text message of human genome sequencing be?
Dr. Nick Yeung
University of Oxford
Decision processes in human performance monitoring
ABSTRACT: The ability to monitor our actions, for example to know when those actions might be incorrect or inappropriate, is of obvious adaptive significance. EEG experiments have revealed reliable neural correlates of performance monitoring, including components associated specifically with incorrect responses. This error-related activity has been widely studied, but it remains unclear whether it reflects a precursor to error detection such as conflict monitoring, the error detection process itself, or subsequent reactions to a detected error. In my talk, I will describe a series of EEG studies that adopted a novel approach to this issue, treating error detection as a decision process based on variable evidence, and identifying neural activity associated with specific stages of this process. Our findings from conventional EEG approaches and multivariate analyses of single-trial data converge to suggest that error-related EEG activity reflects the early stages of decision processes in human performance monitoring. This is a special event: it begins at 2pm and takes place in the meeting room, MVB 3.36.
Dr. Virendra Singh
Indian Institute of Science (IISc), Bangalore
Energy-Efficient Fault Tolerant Microarchitecture for Chip Multiprocessors
ABSTRACT: Relentless scaling of silicon fabrication technology, coupled with lower design tolerances, is making ICs increasingly susceptible to wear-out related permanent faults as well as transient faults. A well-known technique for tackling both transient and permanent faults is redundant execution, specifically space redundancy, wherein a program is executed redundantly on different processors, pipelines or functional units and the results are compared to detect faults. In this presentation, we describe a power-efficient architecture for redundant execution on chip multiprocessors (CMPs) which, when coupled with our per-core dynamic voltage and frequency scaling (DVFS) algorithm, significantly reduces the power overhead of redundant execution without sacrificing performance. Using cycle-accurate simulation combined with an architectural power model, we estimate that our architecture reduces dynamic power dissipation in the redundant core by a mean of 76% with an associated mean performance overhead of only 1.2%. We also present an extension to our architecture that enables the use of cores with faulty functional units for redundant execution without a reduction in transient fault coverage. This extension enables the usage of faulty cores, thereby increasing yield and reliability with only a modest power-performance penalty over fault-free execution.
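The large savings available from per-core DVFS follow from the textbook CMOS dynamic power relation P = C_eff * V^2 * f. A minimal illustrative sketch (the numbers are made up for illustration, not taken from the talk):

```python
def dynamic_power(c_eff, v, f):
    """Classic CMOS dynamic power relation: P = C_eff * V^2 * f."""
    return c_eff * v * v * f

# Illustrative only: running a redundant core at half the voltage and
# half the frequency cuts its dynamic power to 1/8 of the original,
# since power scales linearly in f but quadratically in V.
p_full = dynamic_power(c_eff=1.0, v=1.0, f=1.0)
p_scaled = dynamic_power(c_eff=1.0, v=0.5, f=0.5)
print(p_scaled / p_full)  # 0.125
```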
Prof. Jacob A. Abraham
University of Texas at Austin
Variability in VLSI Design
ABSTRACT: Process variations are inevitable as CMOS technology scales to nanoscale dimensions. This talk will describe the sources of variations and their effects on CMOS circuits. Techniques for generating tests to detect small-delay changes due to variations, as well as ways of applying the tests to manufactured chips in a cost-effective manner, will be described.
Dr Sergey Frenkel
Russian Academy of Sciences - Moscow
A Probabilistic Analysis of Erroneous Behavior Manifestation in a Network of FSMs
ABSTRACT: Fault detection latency and self-healing phenomena are important considerations in the design of highly reliable systems. Computing both the probability distribution function of fault detection latency and the probability of self-healing requires joint consideration of the faulty and fault-free system models at a given level of system modelling. The finite state machine (FSM) is a very popular model of computer system behaviour at the higher levels of system design. The notion of a product of the fault-free and faulty FSM models is convenient for modelling testability and fault-tolerance features; to date, however, this model has been applied only to a single FSM. In this talk I outline some approaches to analysing fault detection latency for an FSM decomposed into a network of smaller component FSMs, and show how a Markov chain model can be used for fault detection latency and self-healing analysis. These models can form the basis of a tool for fault-tolerant system design, in particular for computing the probability distribution functions of both fault detection latency (for permanent faults) and time to self-healing in the presence of transient faults (soft upset errors in particular).
Markus Jalsenius
Flood-It: The colourful game of board domination
ABSTRACT: This talk is about the popular one-player combinatorial game known as Flood-It. In this game the player is given an n by n board of tiles, where each tile is allocated one of c colours. The goal is to make the colours of all tiles equal via the shortest possible sequence of flooding operations. In the standard version, a flooding operation consists of the player choosing a colour k, which then changes the colour of all the tiles in the monochromatic region connected to the top-left tile to k. After this operation has been performed, neighbouring regions which are already of the chosen colour k will then also become connected, thereby extending the monochromatic region of the board. David Arthur, Raphael Clifford, Ashley Montanaro, Benjamin Sach and I have analysed the game, leading to quite a few interesting results that I am going to present. For three or more colours, the game is NP-hard, but NP-hardness is only one of several results that will be presented. Our work was presented at BCTCS (the British Colloquium for Theoretical Computer Science) earlier this year and will formally be presented at the International Conference on Fun with Algorithms in June 2010. As a footnote, our work has even appeared on Slashdot: http://games.slashdot.org/story/10/04/09/134251/All-the-Best-Games-May-Be-NP-Hard To try out the game, see for instance http://apps.yahoo.com/-Mvp8tE30/ .
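The flooding operation described above fits in a few lines of code. This is an illustrative sketch, not the authors' implementation:

```python
from collections import deque

def flood(board, colour):
    """One Flood-It move: recolour the monochromatic region connected to
    the top-left tile of a square board.  Neighbouring regions already of
    the chosen colour join the region automatically, since colours now
    match for subsequent moves."""
    n = len(board)
    old = board[0][0]
    if old == colour:
        return board
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:                      # breadth-first fill of the region
        r, c = queue.popleft()
        board[r][c] = colour
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < n and 0 <= nc < n
                    and (nr, nc) not in seen and board[nr][nc] == old):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return board
```

For example, flooding the board [[1, 2], [1, 3]] with colour 2 recolours the left column and merges it with the existing 2-tile.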
Manuel Oriol
York Extendible Testing Infrastructure (YETI)
ABSTRACT: This presentation is about the York Extendible Testing Infrastructure (YETI), a random testing tool implemented in Java that allows the testing of code written in multiple programming languages (currently Java, JML and .NET). YETI provides a strong decoupling between the testing strategies and the actual language binding. The tool exhibits unparalleled performance, with around 10^6 calls per minute on Java code. It also benefits from a graphical user interface that allows test engineers to steer the testing process while it runs. We illustrate the efficiency of the tool with a study testing all classes in java.lang and some classes in a well-known open source project (iText).
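The core idea of random testing can be illustrated with a minimal loop (a sketch only; YETI itself is written in Java and is far more sophisticated, with pluggable strategies and multi-language bindings):

```python
import random

def random_test(funcs, inputs, n_calls=1000, seed=0):
    """Minimal random-testing loop: call randomly chosen functions on
    randomly chosen inputs and record unexpected exceptions as failures."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_calls):
        f = rng.choice(funcs)
        x = rng.choice(inputs)
        try:
            f(x)
        except Exception as exc:
            failures.append((f.__name__, x, repr(exc)))
    return failures

# A function with a hidden defect is quickly flagged:
def reciprocal(x):
    return 1 / x

bugs = random_test([reciprocal], [0, 1, 2], n_calls=100)
```

Each entry in `bugs` records the function, the failing input (here 0, which raises ZeroDivisionError) and the exception, which is essentially the report a random tester hands back to the engineer.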
Dr. Martin Frith
How to compare gigabases of repeat-rich DNA sequence
ABSTRACT: The main way of analyzing biological sequences is by comparing and aligning them to each other. It remains difficult, however, to compare modern multi-billion-base DNA datasets. All previous methods either cannot tolerate highly skewed (oligo)nucleotide composition (e.g. BLAST), or cannot find weak similarities (e.g. DNA read mappers). Unfortunately, genomes are highly skewed, and weak similarities are interesting. I will describe a BLAST-like method that overcomes these limitations. This method unifies the techniques of suffix arrays, spaced seeds, and subset seeds. This enables us to: compare sequences without masking repeats, compare AT-rich genomes such as malaria, and map DNA reads with many sequence differences.
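To illustrate the seeding idea mentioned above (a toy sketch, not the speaker's algorithm): a spaced seed demands exact matches only at the '1' positions of a pattern, tolerating mismatches at the '0' positions, which makes weak similarities findable:

```python
def spaced_seed_hits(query, target, seed="1101"):
    """Positions in `target` where `query` matches at every '1' position
    of the spaced seed; '0' positions may mismatch.  A toy version of the
    seeding step used in BLAST-like sequence comparison."""
    k = len(seed)
    assert len(query) == k, "query must be the same length as the seed"
    hits = []
    for i in range(len(target) - k + 1):
        if all(s == "0" or target[i + j] == query[j]
               for j, s in enumerate(seed)):
            hits.append(i)
    return hits

# Seed "1101" ignores position 2, so "ACGT" hits "ACTT..." at offset 0
# despite the G/T mismatch, and exactly at offset 4.
print(spaced_seed_hits("ACGT", "ACTTACGT"))  # [0, 4]
```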
Dima Damen
Activity Analysis: Finding Explanations for Sets of Events
ABSTRACT: Automatic activity recognition is the computational process of analysing visual input and reasoning about detections to understand the performed events. In all but the simplest scenarios, an activity involves multiple interleaved events, some related and others independent. The activity in a car park or at a playground would typically include many events. This research assumes the possible events and any constraints between the events can be defined for the given scene. Analysing the activity should thus recognise a complete and consistent set of events; this is referred to as a global explanation of the activity. By seeking a global explanation that satisfies the activity's constraints, infeasible interpretations can be avoided, and ambiguous observations may be resolved. An activity's events and any natural constraints are defined using a grammar formalism. Attribute Multiset Grammars (AMG) are chosen because they allow defining hierarchies, as well as attribute rules and constraints. When used for recognition, detectors are employed to gather a set of detections. Parsing the set of detections by the AMG provides a global explanation. To find the best parse tree given a set of detections, a Bayesian network models the probability distribution over the space of possible parse trees. Heuristic and exhaustive search techniques are proposed to find the maximum a posteriori global explanation. The framework is tested for two activities: the activity in a bicycle rack, and around a building entrance. The first case study involves people locking bicycles onto a bicycle rack and picking them up later. The best global explanation for all detections gathered during the day resolves local ambiguities from occlusion or clutter. Intensive testing on 5 full days showed that global analysis achieves higher recognition rates. The second case study tracks people and any objects they are carrying as they enter and exit a building entrance.
A complete sequence of the person entering and exiting multiple times is recovered by the global explanation.
Wayne Hayes
University of California, Irvine
Dynamical Grammars for Galaxy Image Recognition
ABSTRACT: The Sloan Digital Sky Survey (SDSS) contains an estimated 1 million galaxy images. The Large Synoptic Survey Telescope (LSST) is being built and will scan the entire sky repeatedly, providing images of millions of galaxies and petabytes of data every night. The SuperNova Acceleration Probe (SNAP) is a proposed satellite that will repeatedly map the entire sky from orbit, providing images of perhaps billions of galaxies. Unfortunately, given an image of a spiral galaxy, there does not exist an automated vision algorithm to even tell us which direction the spiral arms wind, much less count them or provide any other quantitative information about them. To wit, the most advanced galaxy classification project is the Galaxy Zoo, in which thousands of human volunteers classify images by eye over the web. Although valuable, such human classifications will (a) provide only limited objective quantitative measurements, and (b) soon be overwhelmed with more data than humans can handle. Such information would prove an invaluable source for astronomers and cosmologists to test current theories of galaxy formation and cosmic evolution (which can now be simulated with high accuracy on large computers, producing copious predictions that cannot be tested due to a lack of quantitative observational data). In this talk, I will report on preliminary results on using dynamical grammars and other machine learning and vision techniques to "parse" images of galaxies, starting us on the road towards producing quantitative data that will be useful for astronomers to test scientific theories. This work is in collaboration with Prof. Eric Mjolsness and Mr. Darren Davis (Ph.D. Candidate), both of UC Irvine.
Gary Morton
Design For Test - Why is it necessary?
ABSTRACT: Some surveys suggest that the costs of testing silicon chips actually exceed the costs of design and manufacture. This talk explains why it is necessary to test chips and covers the DFT techniques used in testing chips and in manufacturing processes, in order to give an insight into where these costs come from and how DFT might reduce them.
Ashley Montanaro
Quantum search with advice
ABSTRACT: One of the most famous results in the field of quantum computation is that quantum computers can search an unstructured list for a special "marked" item more efficiently than classical computers. However, in realistic search problems, there is often some prior information ("advice") about the location of the marked item. This can be used to guide the search process, and sometimes obtain much better performance than naive unstructured quantum search. In this talk, I will discuss new quantum algorithms that take advantage of such advice, given in the form of a probability distribution. The algorithms can achieve significant speed-ups over any possible classical algorithm. In some cases, exponential speed-ups (on average) can be achieved.
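As a classical baseline for comparison (a sketch, not from the talk): given advice in the form of a prior distribution over locations, the best classical strategy for a single marked item is simply to query locations in decreasing order of prior probability, and its expected query count is easy to compute:

```python
def expected_classical_queries(prior):
    """Expected number of queries when locations are examined in
    decreasing order of prior probability -- the optimal classical
    strategy when one marked item is drawn from `prior`."""
    order = sorted(prior, reverse=True)
    return sum((i + 1) * p for i, p in enumerate(order))

# Uniform advice over 4 locations: (1 + 2 + 3 + 4) / 4 = 2.5 queries.
print(expected_classical_queries([0.25] * 4))   # 2.5
# Skewed advice helps: checking the likely location first pays off.
print(expected_classical_queries([0.7, 0.1, 0.1, 0.1]))
```

It is against this kind of advice-aware classical baseline that the quantum algorithms in the talk achieve their speed-ups.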
Aram Harrow
Quantum algorithm for solving linear systems of equations
ABSTRACT: Solving linear systems of equations is a common problem that arises both on its own and as a subroutine in more complex problems: given a matrix A and a vector b, find a vector x such that Ax=b. We consider the case where one doesn't need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x'Mx for some matrix M. In this case, when A is sparse, N by N and has condition number kappa, classical algorithms can find x and estimate x'Mx in O(N sqrt(kappa)) time. Here, we exhibit a quantum algorithm for this task that runs in poly(log N, kappa) time, an exponential improvement over the best classical algorithm. This talk is based on arXiv:0811.3171v3, which is joint work with Avinatan Hassidim and Seth Lloyd.
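The problem setup is easy to state concretely in the classical setting: solve Ax = b, then report the scalar x'Mx rather than the full solution vector. A pure-Python toy instance (illustrative values, not taken from the paper):

```python
def solve_2x2(a, b):
    """Solve a 2x2 linear system Ax = b by Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

# Toy instance of the task the quantum algorithm addresses:
# find x with Ax = b, then report a scalar x'Mx instead of x itself.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = solve_2x2(A, b)

M = [[1.0, 0.0], [0.0, 1.0]]          # here M = I, so x'Mx = |x|^2
xMx = sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))
```

The quantum algorithm never writes down x (which would take time linear in N); it prepares a state proportional to x and measures the expectation value x'Mx directly, which is how the exponential saving in N is possible.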
Xiaosong Wang
Automatic quality improvement of archive film and video
ABSTRACT: In this talk, an ongoing research project on building an automated archive film restoration system will be discussed. The system is composed of two main modules: defect detection and defect removal. First, a novel probabilistic approach to defect detection will be described, combining temporal and spatial information across a number of frames. Then, a joint framework for image sequence restoration and motion correction is introduced, based on the defect maps produced by the first module. Finally, the proposed methods are compared against state-of-the-art and industry-standard methods to demonstrate their superior accuracy in restoring real degraded data.
Paul Rosin
Creating Realistic 2D & 3D Facial Animations
ABSTRACT: This talk will describe ongoing work at Cardiff University on building photo-realistic models of faces using active appearance models (AAMs) applied to both 2D image data and textured 3D mesh data. We have applied these models in a variety of contexts: 1/ speech-driven animation, in which a combined audio and image model is built and new unseen audio is used to synthesise an appropriate image sequence, 2/ performance-driven animation, in which the animation parameters analysed from a video performance of one person are used to animate the facial model of another person, 3/ production of stimuli for psychological experiments to determine the human perception and judgement of the facial dynamics of smiles, and 4/ biometrics, in which people are identified based on their facial dynamics captured during an utterance.
Margarita Chli
Imperial College London
Applying Information Theory to Efficient Simultaneous Localisation And Mapping (SLAM)
ABSTRACT: The success of vision-based SLAM is to be accredited to the richness of information encoded in images, which, however, impedes online performance. The biggest challenge in current estimation algorithms lies in achieving a balance between two competing goals: the optimisation of time complexity and the preservation of the desired precision levels. The key is in *agile manipulation of data*, which is the main idea explored in this talk. Exploiting the power of probabilistic priors in sequential tracking, we investigate the *information* encoded in measurements and estimates, which provides a deep understanding of the map structure as perceived through the camera lens. Employing Information Theoretic principles to guide the decisions made throughout the estimation process, we demonstrate how this methodology can give rise to efficient and dynamic algorithms for quality map-partitioning and robust feature matching in the presence of significant ambiguity and variable camera dynamics.
Prof. Stephen Jarvis
University of Warwick
Assessing Future Computing Requirements using Application Performance Modelling
ABSTRACT: In June 2001 the Advisory Council for Aeronautical Research in Europe (ACARE) was set up in response to a Europe-wide review of the medium- and long-term goals of the aviation industry. The focus of ACARE was to set a strategic research agenda aimed at meeting the environmental challenges set out in the European Aeronautics Vision for 2020. As a result, Rolls-Royce (RR) and other companies in the aeronautical industry were faced with challenges including reducing fuel consumption and CO2 emissions by 50%, reducing external noise by 50%, reducing NOx emissions by 80% and reducing the environmental impact of the manufacture, maintenance and disposal of aircraft-related products. At the forefront of responding to ACARE's pan-European research challenge is the ability of companies such as RR to investigate, through high-performance computing (HPC)-based simulations, innovative methods of design and operability of aircraft products. These complex multi-physics and engineering simulations, even in their simplest form, provide insights into aspects of aircraft which could not otherwise be achieved in the absence of physical testing. HPC-based simulation is essential to RR, and the scientific enquiry that is achievable is dictated by the combination of simulation code and computing architecture in place to support this activity. Should RR's in-house simulation codes become unusable on some future computing architecture because they do not scale, then RR stand to lose significant investment. In this talk I describe how it is possible to assess and realize new routes for simulation code development and deployment through application performance modelling. I will show how future application/architecture combinations can be explored in a mathematical or simulated setting, thus enabling hypothetical questions relating to the configuration of a future architecture to be assessed in terms of their impact on key scientific codes.
I will provide examples of where this work has been applied, including input/code optimization, scheduling, scaling studies, post-installation performance verification, and in HPC procurement.
November 26: Tilo Burghardt
Dirty Vision: Hunting & Gathering of Biometric Uniqueness in Natural Imagery
ABSTRACT: This talk will focus on automated content interpretation in images taken in natural environments. First, the motivations, objectives and difficult/interesting caveats of the subject will be explored. This introductory part will be followed by a more in-depth discussion of a number of visual recognition concepts that add robustness when dealing with corrupted or altered data. Special consideration will be given to non-rigid pattern distortions, partial occlusions, variable lighting and pattern alteration (e.g. dirt). In order to illustrate the approaches outlined, the problem of matching corrupted landmark patterns will be employed as a case study. Finally, a number of applications, results and future questions related to visual animal biometrics/bio-demographics will be presented. Where possible, methods will be related to more general computing or pattern recognition problems.
Understanding online trust and credibility
ABSTRACT: The first objective is to define these concepts and discuss the different factors that affect perceived trust and credibility online. The main application areas are electronic commerce and social networking websites. The second objective is to talk about the implications for user interface and user experience design. Some guidelines and tools will be presented, to help designers maximise the perceived trustworthiness of their online environments.
Visual Effects: From Film to Real-Time
ABSTRACT: This talk will cover recent trends and techniques in state-of-the-art feature film visual effects. As a recent case study, I will talk about the “Sandstorm” effects system which was developed at Sony Imageworks for Spider-Man 3. Sandstorm was used to model, animate, simulate, and render hundreds of millions of sand grains per frame to create the Sandman character. Sandstorm was designed to push the limits of visual fidelity, and therefore complex tasks took several hours of computation on high-end CPUs. The enormous gains in computational throughput afforded by modern programmable GPUs allow many of these techniques to be run interactively on consumer-level GPUs. I will present recent work at NVIDIA developing fluid simulation, volume rendering, and ray tracing systems that deliver quality comparable to that seen in feature films, but at interactive rates. The talk will conclude with a discussion of the convergence between films and video games, and of research topics important to the future of both.
October 01: Jawar Singh
Low Power Process Variation Aware SRAM Bitcell Designs
ABSTRACT: Static Random Access Memories (SRAMs) are widely used to implement high-performance caches and also appear in system-on-chip (SoC) products. An SRAM array is composed of millions of identical bitcells, where each bitcell holds one bit of information. A 6-transistor (6T) SRAM bitcell is usually the first choice for implementing an SRAM array. In the nano-regime, however, the 6T SRAM bitcell with minimum-feature-sized devices is of limited viability because of its susceptibility to process variation and parametric failures. Moreover, in energy-constrained, reliability-critical applications, reduced-supply or sub-threshold operation further exacerbates this problem. Several SRAM bitcell designs have therefore recently been proposed for low-power, energy-constrained applications, compatible with sub-threshold logic operation. Most of these bitcells employ extra transistors or upsized devices to provide sub-threshold compatibility and robustness to process variation. Process variation and other side effects induced by CMOS technology scaling in SRAM bitcells will be the focus of this talk. A comprehensive study of existing and proposed SRAM designs will be presented, along with how emerging devices such as Si tunnel FETs can be modelled for circuit-level simulation. I will show how a behavioural model for TFET devices accurately captures the device physics and how it is then used for SRAM bitcell simulations.
University of California, Santa Cruz
Learning Task-specific Object Location Predictors with Boosting and Grammar-guided Feature Extraction
ABSTRACT: Beamer is a new system for unstructured object detection: from an input image, it emits a list of (x,y) pairs, which are the predicted locations of objects. The system shows excellent results in the presence of noisy and ambiguous greyscale aerial imagery, and I describe key elements of the approach to achieve these results.
Brno University of Technology
Video processing and TRECVID (3.36 MVB @ 1600)
ABSTRACT: Overview of TRECVID video processing tasks, feature extraction for video processing, estimation of similarity of frames and video sequences, copy detection, video summarization, applications, Brno's participation in the NIST TRECVID evaluations, and conclusions.
Brno University of Technology
Hologram and optical field synthesis (3.36 MVB @ 1500)
ABSTRACT: Overview of the principles used in holography and optical fields, optical field and computer graphics approaches, optical field of point light source, acceleration of optical field synthesis, precision issues, error metric, data width and accelerated implementation of optical field synthesis, conclusions.
University of California, Berkeley
Design Principles for Visual Communication|
ABSTRACT: Effective visualizations can help analysts rapidly find patterns lurking within large data sets and they can help audiences quickly understand complex ideas. Yet, even with the aid of computers, hand-designing effective visualizations is time-consuming and requires considerable human effort. The challenge is to develop new algorithms and user interfaces that facilitate visual communication by making it fast and easy to generate compelling visual content. Skilled human designers use a variety of design principles to improve the perception, cognition and communicative intent of an image. In this talk I'll describe techniques for identifying the appropriate design principles within specific domains. For each domain I'll show how to algorithmically instantiate design principles within an automated design system or an interactive design tool.
Statistical Regularities in Low and High Dynamic Range Images|
ABSTRACT: An important step for better understanding the human visual system is understanding the input that it encounters. Previous work has focused on statistical analysis of conventional images (LDR) and has discovered a number of interesting regularities. High dynamic range (HDR) imaging provides a more accurate way of capturing the luminance of real scenes. We analyse natural and manmade scenes using a variety of statistical tools and compare our findings for both HDR and LDR.
José Martínez Carranza||
Efficient Visual SLAM Using Planar Features|
ABSTRACT: Point-based visual SLAM suffers from a trade-off between map density and computational efficiency. With too few mapped points, tracking range is restricted and resistance to occlusion is reduced, whilst expanding the map to give a dense representation significantly increases computation. We address this by introducing higher-order structure into the map using planar features. The parameterisation of structure allows frame-by-frame adaptation of measurements according to visibility criteria, increasing the map density without increasing computational load. This facilitates robust camera tracking over wide changes in viewpoint at significantly reduced computational cost. Results of real-time experiments with a hand-held camera demonstrate the effectiveness of the approach.
Prof. Maciej Ciesielski|
University of Massachusetts
Taylor Expansion Diagrams for Word Level Logic Synthesis (Slides PDF 920kB)|
ABSTRACT: This talk introduces Taylor Expansion Diagrams (TED), a canonical, graph-based representation for dataflow and computation-intensive designs, such as digital signal processing and computer graphics. TED representation is based on a word-level, rather than binary decomposition principle, which makes it possible to represent designs on higher levels of abstraction. Owing to their canonical property, TEDs can be used for equivalence checking of designs specified on algorithmic or behavioural levels, written in C, system C or behavioural HDL. In addition to formal verification, TED representation can be used for synthesis and high-level transformations of designs specified on behavioural level.
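The canonicity property that makes this kind of equivalence checking possible can be illustrated with a toy normal form for polynomials. This is a deliberately simplified stand-in for TEDs, not the TED data structure itself, but it shares the key idea: equivalent word-level expressions reduce to one identical representation.

```python
# Toy illustration of canonical word-level forms (NOT the actual TED graph):
# a polynomial is a dict mapping monomials (sorted tuples of (variable,
# exponent) pairs) to integer coefficients.  Two syntactically different
# dataflow expressions that compute the same function reduce to the same dict.

def poly(var):
    """Polynomial consisting of a single variable."""
    return {((var, 1),): 1}

def const(c):
    """Constant polynomial (the zero polynomial is the empty dict)."""
    return {(): c} if c else {}

def add(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
        if r[m] == 0:          # drop cancelled terms to keep the form canonical
            del r[m]
    return r

def mul(p, q):
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            e = {}
            for v, k in m1 + m2:           # merge exponents of both monomials
                e[v] = e.get(v, 0) + k
            m = tuple(sorted(e.items()))
            r[m] = r.get(m, 0) + c1 * c2
    return {m: c for m, c in r.items() if c}

a, b = poly('a'), poly('b')
lhs = mul(add(a, b), add(a, b))    # (a + b) * (a + b)
rhs = add(add(mul(a, a), mul(const(2), mul(a, b))), mul(b, b))  # a^2 + 2ab + b^2
assert lhs == rhs                  # identical normal form => provably equivalent
```

Because both expressions reduce to the same normal form, a simple equality test proves functional equivalence without simulation; TEDs achieve the same effect with a compact shared graph rather than an explicit monomial list.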
So I've optimized my ISA for performance - will it cost me forever to verify it?|
ABSTRACT: As design complexity rapidly grows, verification consumes an ever increasing part of the time invested in the design of a system. In the hardware community, up to 70% of the design effort has reportedly been spent on verification. Design decisions are made to optimize architectures for high performance or reduced power consumption, but the impact of these decisions on verification is rarely considered when they are made. We borrow tricks from the analysis of design power consumption and transfer the method to make design decisions verification-aware. In this talk, I will introduce the formal framework used to extract the desired properties of a design, and the language and platform used to build the formal model. Last but not least, a case study will be presented to demonstrate the framework.
Alejo J Nevado||
Mathematical Models and Computer Simulations to Study the Brain|
ABSTRACT: Computational neuroscience is an interdisciplinary science that links together, among others, the fields of mathematics, computer science, physics and neuroscience. The recent success of applying common engineering tools to unraveling some of the brain's outstanding mysteries has elicited high interest in the area. In this talk, the main computational models used in this field are presented, followed by an explanation of how classical dynamical systems theory and numerical analysis techniques can be applied to their study. Finally, an example of such a study is briefly presented.
University of British Columbia
Visible Light Tomography in Computer Graphics|
ABSTRACT: Tomographic methods are the standard approach for obtaining volumetric measurements in medicine, science, and engineering. Typical tomography setups acquire 2D X-ray images of an object, and reconstruct a 3D voxel representation from this data. Unfortunately, for many applications in computer graphics, such X-ray setups are not feasible due to cost and/or safety concerns. In this presentation, I will introduce our recent work on visible light tomography, which has much more modest hardware requirements. I will discuss tomographic methods in the presence of refraction, and show applications to the scanning of transparent objects, and the capture of gas flows.
ABSTRACT: First of all we define the generalised function matching (GFM) and generalised parameterised matching (GPM). We give NP-Completeness results for several versions of these problems. We then introduce a related problem we term "pattern matching under string classes" which we show to be solvable efficiently. We discuss an optimisation variant of generalised matching and give a polynomial time sqrt(OPT/k)-approximation algorithm for fixed k. Finally, we define another optimisation version and we show using Kolmogorov complexity that, on average, the naive solution is the best approximation we can achieve.
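To make the GFM problem concrete, here is a brute-force checker written directly from the definition: a pattern matches a text if each pattern symbol can be mapped to some non-empty string such that the concatenation of the images spells out the text. This is an illustrative sketch for tiny inputs only; the NP-completeness results mentioned in the abstract indicate that no efficient exact algorithm is expected in general.

```python
# Brute-force generalised function matching: try every assignment of
# non-empty substrings to pattern symbols, backtracking on failure.

def gfm_match(pattern, text, mapping=None):
    if mapping is None:
        mapping = {}
    if not pattern:
        return text == ''                  # all symbols consumed: text must be too
    sym, rest = pattern[0], pattern[1:]
    if sym in mapping:                     # symbol already bound: image must repeat
        img = mapping[sym]
        return text.startswith(img) and gfm_match(rest, text[len(img):], mapping)
    for i in range(1, len(text) + 1):      # try every non-empty image for sym
        mapping[sym] = text[:i]
        if gfm_match(rest, text[i:], mapping):
            return True
        del mapping[sym]                   # backtrack
    return False

assert gfm_match("aba", "xyzxy")   # a -> "xy", b -> "z"
assert not gfm_match("aa", "abc")  # "abc" cannot be split into two equal halves
```

Note that GFM, unlike parameterised matching, allows two distinct pattern symbols to map to the same string; a GPM checker would add an injectivity test on `mapping`.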
University of Cambridge
Communication - The next resource war for parallel microprocessors|
ABSTRACT: Scaling of electronics technology has brought us to a pivotal point in the design of computational devices. Technology scaling favours transistors over wires which has led us into an era where communication takes more time and consumes more power than the computation itself. I will discuss how this technology driver inevitably pushes us toward a communication-centric approach to computer systems design, from computer architectures through to algorithm design.
IBM Research Labs,
Causality and Formal Verification|
ABSTRACT: The formal definition of causality by Halpern and Pearl extends straightforward counterfactual causality. Causality and its quantitative measure responsibility allow us to compute causes for events and the exact responsibility of each cause for the event. As we will see in several examples, Halpern and Pearl's definition matches our intuition about what should be considered a cause. It also unexpectedly comes in handy in formal verification. Formal verification is the mathematical method of checking the correctness of computerized systems by automatically proving their correctness with respect to a specification expressed in some formal way (for example, as a temporal logic formula). Many areas in formal verification, such as coverage estimation, vacuity detection, symbolic trajectory evaluation (STE), and counterexample explanation can be viewed in the framework of causality. Moreover, in these areas, causality and responsibility give additional insights into the problem and allow us to produce more accurate solutions.
Prof N Balakrishnan|
Indian Institute of Science,
|Information Security Scenario in India|
Applications of statistics and machine learning in the process of silicon design|
ABSTRACT: The application of statistics to the process of silicon manufacture has been immensely successful. There is, however, little industrial application of these techniques in the design process. A modern silicon design environment has many similarities to a software design environment but is in some ways more tractable because of the constraints imposed by physical realisation of the design. There are many opportunities for mining data, recognising patterns and reasoning about uncertainty. The aim of this seminar will be to introduce a number of areas which, prima facie, are good candidates for new applications. These are research areas which I will be pursuing during some part-time research in the university.
University of Reading
Polytopes and Polyhedra: from Hardware Compilation to PetaScale Parallelisation|
ABSTRACT: Virtually every controllable object has some form of computing device which performs a variety of tasks with little human intervention. The number of powerful embedded computing devices is set to exceed the number of standard desktop processors by more than a factor of three and this trend is expected to continue. At the high-end of the performance spectrum, an ExaScale system presents a formidable billion-way concurrency arising from millions of cores, each with thousands of threads. This exponential growth in the use of computing technology is a result of continuing demands for higher performance in all areas of computing -- from embedded computing applications (from web-based information appliances to space technology) to a new wave of super-computing grand challenge applications -- Peta-Flop systems for Bioinformatics. Parallel computing is also gaining mainstream status as evidenced by recent developments in multi-core “desktop Supercomputers”. Parallel computing, once considered a niche applications technology, is therefore set to play a critical role in realizing a "pervasive" computing goal. These advances have generated a new impetus in the high level approaches to high performance system design – both hardware and software. I will present an overview of three threads of research in the context of high performance systems that are underpinned by a common "semantic" framework of polytopes/polyhedra. Depending on the nature of the application the framework may be interpreted as transformation of specifications, compilation of (parallel) programs or synthesis of architectures. I will briefly highlight some current work on applying these programming models to the design of “software at scale” in the PetaScale paradigm.
Using Low Level Motion to Estimate Human Pose|
ABSTRACT: There is currently much interest within the computer vision community in estimating human pose from single or multiple cameras without the need to attach markers to the subject being observed. Despite this interest, the problem remains unsolved, due largely to the many degrees of freedom of the human body and ambiguities in observational data. Whilst current approaches depend on exploiting appearance, we present a system that relies entirely on motion cues. In this seminar we will discuss some of the progress made by the community over the last decade, how our work fits in with this, and some of the potential applications of such a system.
Paul Morrissey||Modular Security Analysis of the TLS Handshake Protocol|
Pseudo-randomness in a quantum world|
ABSTRACT: Pseudo-randomness is a subject of central importance in classical computer science. In this talk, I will introduce notions of pseudo-randomness useful in quantum algorithms, motivated by examples. All the quantum mechanics required will be introduced as it is needed.
Microarchitectural Support for Security|
ABSTRACT: Power analysis attacks are a very powerful cryptanalytic technique that exploits side-channel information. For instance, these attacks have been used in the past to extract secret key information from smart cards. In order to counteract such power analysis attacks at the hardware level, the idea of a non-deterministic processor was proposed some years back, but the whole concept has never been evaluated in practice. In my talk I will explain how a power analysis attack works in general and give a brief overview of common countermeasures. Then I will discuss the architecture of our non-deterministic processor implementation and show how such a design can weaken a power analysis attack in "real life".
Optimal integration of biologically plausible decision making and reinforcement learning|
ABSTRACT: Decision making and reinforcement learning in animals are both known to involve the same subcortical brain structure, the basal ganglia. Well-established models exist for both, but little work has been done on integrating the two into a combined model. This talk looks at how these two aspects of the basal ganglia can be implemented simultaneously in a biologically plausible way.
of Information Technology
Feature Selection in Taxonomies with Applications to Paleontology|
ABSTRACT: Taxonomies for a set of features occur in many real-world domains. An example is provided by paleontology, where the task is to determine the age of a fossil site on the basis of the taxa that have been found in it. As the fossil record is very noisy and there are lots of gaps in it, the challenge is to consider taxa at a suitable level of aggregation: species, genus, family, etc. For example, some species can be very suitable as features for the age prediction task, while for other parts of the taxonomy it would be better to use the genus level or even higher levels of the hierarchy. A default choice is to select a fixed level (typically species or genus); this misses the potential gain of choosing the proper level for sets of species separately. Motivated by this application, we study the problem of selecting an antichain from a taxonomy that covers all leaves and helps to better predict a specified target variable. Our experiments on paleontological data show that choosing antichains leads to better predictions than fixing specific levels of the taxonomy beforehand.
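The antichain-selection idea can be sketched on a toy taxonomy. The per-node costs below are hypothetical stand-ins for whatever error estimate the prediction task supplies; the recursion simply decides, at each node, whether to cut the taxonomy there (using the node as one aggregated feature) or to descend to its children.

```python
# Select the minimum-cost antichain covering all leaves of a taxonomy tree.

def best_antichain(tree, cost, node):
    """Return (total cost, antichain) covering all leaves below `node`."""
    children = tree.get(node, [])
    if not children:                 # at this point a leaf covers itself;
        return cost[node], [node]    # ancestors may still override this choice
    child_cost, child_nodes = 0, []
    for c in children:
        cc, cn = best_antichain(tree, cost, c)
        child_cost += cc
        child_nodes += cn
    # Either cut here (one aggregated feature) or keep the best cuts below.
    if cost[node] <= child_cost:
        return cost[node], [node]
    return child_cost, child_nodes

# Toy taxonomy: family -> genera -> species, with made-up per-node costs.
tree = {'Family': ['GenusA', 'GenusB'], 'GenusA': ['sp1', 'sp2']}
cost = {'Family': 10, 'GenusA': 2, 'GenusB': 3, 'sp1': 2, 'sp2': 3}
print(best_antichain(tree, cost, 'Family'))   # -> (5, ['GenusA', 'GenusB'])
```

Here the genus level happens to win everywhere, but with different costs the recursion would mix levels, which is exactly the flexibility a fixed-level choice gives up.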
Discovering Higher Level Structure in Real-Time Visual SLAM|
ABSTRACT: Recent advances in visual simultaneous localisation and mapping (SLAM) make it an appropriate technology for a wide range of moving platforms where the main mapping sensor is a single camera, such as small domestic robots and personal wearable devices. However, maps still generally consist of sparse sets of features which presents problems for applications which require interaction with the environment. In this talk, a technique is presented for automatic discovery and incorporation of higher level structure in the map which can reduce redundancy in the state space and provide a richer representation of the surrounding environment. The approach is demonstrated in a real-time system operating with a hand-held camera in a small office environment.
David Coulthurst||Global Illumination on SIMD Processor Architectures|
Pete Trimmer||Evolving Optimal Decisions|
Dr Seth Bullock|
University of Southampton
The Lure of Artificial Worlds: Simulations vs. Experiments|
ABSTRACT: For practitioners across a growing number of academic disciplines there is a strong sense that simulation models of complex real-world systems provide something that differs fundamentally from that which is offered by mathematical models of the same phenomena. The precise nature of this difference has been difficult to isolate and explain, but, occasionally, it is cashed out in terms of an ability to use simulations to perform "experiments". The notion here is that empirical data derived from costly experiments in the real world might usefully be augmented with data harvested from the right kind of simulation models. We will reserve the term "artificial worlds" for such simulations. In this paper, rather than tackle the problems inherent in this type of claim head on, we will approach them obliquely by asking: what is the root of the attraction of constructing and exploring artificial worlds? By combining insights drawn from the work of Levins, Braitenberg, and Clark, we arrive at an answer that at least partially legitimises artificial worlds by allocating them a useful scientific role, without having to assign the status of empirical enquiry to their exploration.
Andrew Moss||Program Interpolation|
New Horizons For HCI|
ABSTRACT: HCI is experiencing a renaissance. No longer only about being user-centred, it has set its sights on pastures new, embracing a much broader and far-reaching set of interests. From emotional, eco-friendly, embodied experiences to context, constructivism and culture, HCI is changing apace: from what it looks at, the lenses it uses and what it has to offer. At the same time, new technologies are proliferating and transforming how we live our lives, for example, significant growth in techno-dependency and hyper-connectivity. As a result of these changes, HCI researchers and practitioners are facing a congeries of concerns that can be overwhelming. In my talk, I discuss how a different way of thinking is needed to help manage and make sense of the multiple perspectives, challenges and issues that increasingly define HCI.
Dr Takeshi Kurata|
Recent Progress on Augmented-Reality Interaction in AIST|
ABSTRACT: The goal of our research group in AIST, Japan is to create wearable/tangible interfaces which enable intuitive and direct interaction with real/virtual environments and remote person based on augmented reality, computer vision, and multi-sensor fusion. In this talk, I will summarize our research findings on augmented-reality interaction and also introduce our ongoing research projects.
The Challenge of Integrating Probability and Logic|
ABSTRACT: I will begin with some motivation and history of the problem of integrating probability and logic, and its relevance to building agent systems. Then I will describe how higher-order logic provides a suitable framework in which to carry out the integration. The ideas will be illustrated by probabilistic and modal computations that an agent might need to carry out.
University of Washington
Tuned correlation transfer and consequences for coding|
ABSTRACT: Correlations among neural spike times are ubiquitous, and questions of how these correlations develop, and of the impact they have on the neural code, have become central in neuroscience. We address this aspect: if correlations arise from common inputs to different neurons, how do they depend on the cells' operating range -- their rate and regularity of spiking? We first use linear response calculations, simulations, and in vitro experiments to show that correlations between pairs of neurons vary sharply with their firing rates. We then illustrate the consequences via Fisher information, which quantifies the accuracy of encoding. This is joint work with Jaime de la Rocha, Brent Doiron, Kreso Josic, and Alex Reyes.
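The role of Fisher information in this argument can be illustrated with a minimal two-neuron example in plain Python (the numbers are hypothetical). For a local estimate, linear Fisher information is I = f'ᵀ C⁻¹ f', where f' holds the tuning-curve slopes and C the noise covariance of the pair.

```python
# Linear Fisher information for a pair of neurons, with the 2x2 matrix
# inverse written out explicitly so no linear-algebra library is needed.

def fisher_info(df1, df2, v1, v2, rho):
    """I = f'^T C^{-1} f' for slopes (df1, df2), variances (v1, v2)
    and noise correlation rho."""
    c12 = rho * (v1 * v2) ** 0.5          # off-diagonal covariance
    det = v1 * v2 - c12 ** 2
    return (df1 * df1 * v2 - 2 * df1 * df2 * c12 + df2 * df2 * v1) / det

# Identical slopes: shared positive noise correlation hurts the code...
print(fisher_info(1.0, 1.0, 1.0, 1.0, 0.5))   # -> 1.333...
# ...while for opposite slopes the same correlation helps.
print(fisher_info(1.0, -1.0, 1.0, 1.0, 0.5))  # -> 4.0
```

The uncorrelated baseline in both cases is 2.0, so whether correlation helps or hurts depends on how it lines up with the tuning; this is why the dependence of correlations on firing rate, the talk's main result, feeds directly into coding accuracy.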
Microsoft Research Cambridge
User Interface Research at Microsoft Cambridge|
ABSTRACT: The mouse is celebrating its 40th birthday this year. This little device has achieved so much in a short lifetime. Indeed it's mainly because of this device that we have the Graphical User Interface (GUI), comprising Windows, Icons, Menus and Pointer (WIMP). The mouse and WIMP are an integral part of our daily interactions with computers, but what's next? In this talk I will give examples of novel computing devices that we are building at Microsoft Research, which allow us to shift away from the traditional mouse and WIMP-based interactions. Like other researchers, we are interested in enabling more natural interactions with computers, replacing the mouse with our hands, and making the user interface 'come to life' in more tangible ways. Our future interactions with computers are likely to become more hands-on, more playful, and more aligned with our real-world interactions.
Dr Martin Madera||
Detecting weak similarities among protein sequences by comparison of profile hidden Markov models|
ABSTRACT: The amino-acid sequence of a protein is a simple string over an alphabet of the 20 naturally-occurring amino-acids. A key challenge in bioinformatics is to work out if two proteins are related (descended from a common ancestral protein) using only their sequences as input. If the two proteins are in fact related, then they are likely to adopt similar 3D structures and perform similar biological functions, which is valuable information for bench biologists. In the talk I will describe PRofile Comparer (PRC), my program for comparing two profile hidden Markov models of protein families, and will outline some of the outstanding challenges.
Imperial College London
Combining Reasoning Systems for AI tasks and Automated Discovery in Pure Mathematics|
ABSTRACT: In the Combined Reasoning Group at Imperial, we combine theorem provers, machine learning systems, constraint solvers, SAT solvers, model generators and computer algebra packages so that the whole is more than a sum of the parts. I will survey a number of projects we have undertaken recently with such combinations, including applications to AI techniques (non-theorem proving, CSP reformulation and automatic invention of fitness functions), and applications to discovery in pure mathematics (in particular, the investigation and classification of finite algebras). I will describe some lessons learned from integrating these systems and give some details of current and future research directions that we are pursuing.
Mapping Unknown Environments for Augmenting Reality|
ABSTRACT: This talk discusses the confluence of two interesting areas: Mapping an unknown environment with a single camera (simultaneous localisation and mapping (SLAM)) and augmented reality for in-situ information visualisation. Unknown environments pose a particular challenge for augmented reality applications because the 3D models required for tracking, rendering and interaction are not available ahead of time. Here, the SLAM approach can deliver the required models without any advance preparation. The talk describes recent advances in approaching the SLAM problem, resulting in scalable, consistent and efficient creation of 3D maps of an environment. Further, the extension of such systems to specifically track and estimate high-level features interesting to an end-user is discussed. The automatic estimation of these complex landmarks by the system relieves the user from the burden of manually specifying the full 3D pose of annotations while improving accuracy. These properties are especially interesting for remote collaboration applications where either user interfaces on handhelds or camera control by the remote expert are limited.
The Common State Filter for SLAM|
ABSTRACT: This talk will present the Common State Filter (CSF), a novel and efficient method of Multiple Hypothesis SLAM (MHSLAM) for Kalman Filter-based SLAM algorithms. Conventional MHSLAM algorithms, in particular the Gaussian Sum Filter (GSF), require the entire vehicle and map state to be copied for each hypothesis. The CSF, by contrast, maintains a single, common instance of the vast majority of the map, only copying the map portion that varies substantially across different hypotheses. This results in a significant storage and computational improvement over the GSF. We will present our results when applying the CSF to data association and the application of constraints in the presence of clutter.
Dr. Rafal Bogacz||
Towards integration of reinforcement learning and optimal decision making theories of the cortico-basal ganglia circuit|
ABSTRACT: This talk discusses relationships between two sets of theories: Reinforcement learning theories describing how animals learn which actions to select to maximize reward, and optimal decision making theories describing how animals select individual actions by optimally integrating sensory evidence. The two sets of theories describe different aspects of information processing in the cortico-basal-ganglia circuit, but it has not previously been investigated whether this circuit can simultaneously implement reinforcement learning and optimal decision making. This talk demonstrates that, for a class of simple tasks, this circuit can closely approximate optimal decision making when the stimulus-response mapping has been learned in the basal ganglia according to reinforcement learning models. However, for a more general class of tasks (with unequally rewarded actions), a biologically realistic modification of reinforcement learning models is required to allow optimal decision making.
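For readers unfamiliar with the "optimal decision making" side of this story, the benchmark is the sequential probability ratio test (SPRT), which the basal-ganglia models in this line of work are argued to approximate. A toy sketch with made-up parameters:

```python
# SPRT: accumulate the log-likelihood ratio of binary evidence for
# H_A: P(x=1)=p_a versus H_B: P(x=1)=p_b, and commit to a choice as soon
# as the accumulated ratio crosses a fixed threshold.
import math

def sprt(samples, p_a=0.6, p_b=0.4, threshold=2.0):
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log(p_a / p_b) if x else math.log((1 - p_a) / (1 - p_b))
        if llr >= threshold:
            return 'A', n          # decide for H_A after n samples
        if llr <= -threshold:
            return 'B', n          # decide for H_B after n samples
    return 'undecided', len(samples)

print(sprt([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))   # -> ('A', 9)
```

The SPRT is optimal in the sense of minimising the expected number of samples for given error rates; the models discussed in the talk ask how closely learned cortico-basal-ganglia dynamics can reproduce this accumulate-to-threshold computation.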
Mentor Graphics Corp.
Digital Convergence Demands on Design and Verification|
ABSTRACT: Digital convergence is providing a direct gateway into the diverse world of digital content, and it is changing our lives. In the future (which is sooner than you might think), almost every device will be a network device. The virtual office will fit nicely in your pocket, providing access to telephone, email, video conferencing, fax, spreadsheets, presentations and the internet, all controlled through speech recognition or multi-touch interactive displays. At home, the telephone, personal computer, mail, newspaper, magazines, DVD player, television, internet, and home environment controls will all converge into a few all-purpose devices that are networked together. The emerging global network will provide the plumbing for worldwide access to digital content, enabling users to shift their focus to the importance of the content and put less focus on the technical steps required to access it. What is required of design and verification to meet the demands brought by today's digital convergence? This presentation will review today's emerging technologies, ranging from advanced transaction-level, constrained-random, coverage-driven verification environments to power analysis, static and dynamic clocking analysis, and functional formal verification. Across these individual technologies, we are starting to witness a convergence of verification technologies that results in improved productivity and higher coverage. The presentation will conclude with an overview of what Mentor Graphics is doing in the verification space to meet today's digital convergence demands.
Prof Dan Rueckert|
Imperial College London
Quantification of growth and motion using non-rigid registration, 1.30pm|
ABSTRACT: Three-dimensional (3D) and four-dimensional (4D) imaging of dynamic structures is a rapidly developing area of research in medical imaging. In this talk we will show how non-rigid registration techniques can be used for the detection of temporal changes such as growth or atrophy in brain MR images. We will also show how non-rigid registration can be used to analyze the motion of the heart from cardiac MR images.
Miguel A. Nacenta|
University of Saskatchewan
Issues in multi-display environment design |
ABSTRACT: Multi-display environments - systems with two or more displays - are becoming popular due to improvements in technology and the reduced cost of new displays. Simultaneously, new kinds of interactive surfaces are now available that take the computer out of the office desktop into the walls, tabletops and pockets of users. Most current interfaces are, however, still designed with a single-display paradigm in mind. My work investigates issues in multi-display interaction design and proposes alternatives for the design of heterogeneous MDEs, that is, environments composed of multiple kinds of displays. Three main areas are explored: low-level targeting in multi-monitor systems, perspective-aware interaction, and ad-hoc mobile meetings.
Dr. Peter Hall|
University of Bath
A hat is a hat is a hat, whether real, photographed or drawn.|
ABSTRACT: The problem of learning the class identity of visual objects has received considerable attention recently. With rare exception, all of the work to date assumes low variation in appearance. Consequently objects are depicted in one style only, usually photographic. The same object depicted in other styles --- as a drawing, perhaps --- cannot be identified reliably. Yet humans are able to name the object no matter how it is depicted, and even recognise a real object having previously seen only a drawing. This talk describes a classifier which is unique in being able to learn class identity no matter how the class instances are depicted.
ABSTRACT: Recent advances in fluid simulations have yielded exceptionally realistic imagery. However, most algorithms have computational requirements that are prohibitive for real-time simulations. The exceptions are Fourier-based solutions, but due to wrap-around, boundary conditions are not naturally available, leading to inaccuracies near the boundary. We show that boundary conditions can be imposed by solving the mass conservation step using cosine and sine transforms instead of the Fourier transform. Further, we show that measures against density dissipation can be computed using cosine transforms, and we describe a new method to compute surface tension in the same domain. This combination of related algorithms leads to real-time simulations with boundary conditions.
Descent into Cache-Oblivion (Slides PDF 230kB)|
ABSTRACT: When analysing algorithms, it is common to do so in the RAM model of computation. In this model, accessing an arbitrary memory location is assumed to take constant time. However, for a modern computer with a memory hierarchy, this is often not the case. When the amount of data considered is large, time spent waiting for I/O requests can dominate runtime, which is not captured in the RAM model. The talk will start with an introduction to the I/O and Cache-Oblivious models, which seek to resolve this problem by analysing asymptotic I/O usage. The remainder of the talk will contain an overview of some of the fundamental results in Cache-Oblivious Algorithms and Data Structures. (A22)
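A classic example of the cache-oblivious style is recursive matrix transposition, sketched below: the code mentions no cache size or block size, yet the recursion eventually produces subproblems small enough to fit in every level of the memory hierarchy, giving asymptotically optimal I/O at all levels simultaneously.

```python
# Cache-oblivious matrix transpose: recursively halve the longer dimension
# until the subproblem is small, then copy directly.  No cache parameters
# appear anywhere in the code.

def transpose(A, B, r0=0, r1=None, c0=0, c1=None):
    """Write the transpose of A (r x c list of lists) into B (c x r)."""
    if r1 is None:
        r1, c1 = len(A), len(A[0])
    if (r1 - r0) * (c1 - c0) <= 16:        # small base case: copy directly
        for i in range(r0, r1):
            for j in range(c0, c1):
                B[j][i] = A[i][j]
    elif r1 - r0 >= c1 - c0:               # split the longer dimension
        m = (r0 + r1) // 2
        transpose(A, B, r0, m, c0, c1)
        transpose(A, B, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        transpose(A, B, r0, r1, c0, m)
        transpose(A, B, r0, r1, m, c1)

A = [[i * 4 + j for j in range(4)] for i in range(3)]
B = [[0] * 3 for _ in range(4)]
transpose(A, B)
assert B == [list(row) for row in zip(*A)]
```

Python of course hides the real memory layout, so this is an illustration of the recursion pattern rather than a performance demonstration; in C over a flat array, the same divide-and-conquer achieves O(rc/B + 1) cache misses without knowing the block size B.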
Paul Duff||Clustering Auto-calibration for Ultrasonic Positioning Systems (Slides PDF 250kB) (A20)|
Prof. Neil Burgess||Research-led teaching for a research-led University?|
Dr Martin Poulter||
The Functional Equation approach to Optimal Reasoning|
ABSTRACT: This talk looks at inference and decision from a logician's perspective, drawing especially on the work of Edwin Jaynes. Starting with the many flavours of logical consistency, we witness the final triumph of Shannonian Bayesianism and consider how to identify rational, irrational and non-rational factors in inference.
Prof. Ross King|
The Robot Scientist Project |
ABSTRACT: We are interested in the automation of science for both philosophical and technological reasons. To this end we built the first automated system that is capable of automatically originating hypotheses to explain data, devising experiments to test these hypotheses, physically running these experiments using a laboratory robot, interpreting the results, and then repeating the cycle. We call such automated systems "Robot Scientists". We applied our first Robot Scientist to predicting the function of genes in a well-understood part of the metabolism of the yeast S. cerevisiae. For background knowledge, we built a logical model of metabolism. The experiments consisted of growing mutant yeast strains, with known genes knocked out, on specified growth media. The results of these experiments allowed the Robot Scientist to test hypotheses it had abductively inferred from the logical model. In empirical tests, the Robot Scientist's experiment selection methodology outperformed both randomly selecting experiments and a greedy strategy of always choosing the experiment of lowest cost; it was also as good as the best humans tested at the task. To extend this proof-of-principle result to the discovery of novel knowledge we have: built a fully automated Robot Scientist called "Adam", formed a model of most of the known metabolism of yeast, and developed an efficient way of inferring probable hypotheses based on bioinformatics. Adam has now generated several novel hypotheses about which genes encode orphan enzymatic reactions in yeast and has experimentally confirmed them. We have additionally confirmed these hypotheses using "gold standard" manual methods.
Prof. Nigel Smart||
Is SSL provably secure? (Slides PDF 130kB)|
ABSTRACT: In this talk I will describe some joint work with P. Morrissey and B. Warinschi on the SSL protocol. We attempt to show that an abstraction of the SSL protocol does provide a secure key agreement protocol, and we quantify exactly what properties are required of any subprotocol which produces the pre-master secret. (A38)
Prof Mike Sternberg|
Imperial College London
Computational approaches for drug discovery - protein bioinformatics and logic-based chemoinformatics Engineering Colloquium, 4pm, QB1.15 |
ABSTRACT: This talk will describe the development of novel computational methods to aid drug discovery and development. Developments in protein bioinformatics will be described, including our protein structure prediction tool PHYRE and a novel approach (CONFUNC) to predict the function of a protein from its sequence. Both are available for use by the community via our web site. The next step along the drug discovery pipeline is often the identification of low molecular weight molecules with the required biological activity. In a series of studies in collaboration with Professor Stephen Muggleton (Dept of Computing, Imperial College London), we have developed a novel machine learning approach to direct the discovery of new potential drugs. This is known as Support Vector Inductive Logic Programming, which generates logic rules using inductive logic programming and then obtains quantitative predictions using a support vector machine. (47)
Dr. Ksenia Shalonova||
Automatic Learning of Word Structure: Two Novel Algorithms (Slides PDF 150kB)|
ABSTRACT: Automatic learning of word structure is essential in all Speech and Language technologies, such as Text Mining, Machine Translation, Speech Recognition and Speech Synthesis. Depending on the complexity of a word structure, this task is equivalent to learning either regular languages (finite state automata) or context-free languages. The current talk will present two novel algorithms for learning regular languages within the framework of word structure, using sets of strings as input data. The first algorithm combines string alignment and a tree-like data structure, and the second algorithm is based on multiple-sequence alignment and a novel heuristic that has both a general philosophical and a linguistic interpretation. The proposed algorithms have the potential to be applied to other tasks. (A41)
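The finite-state side of this task can be illustrated with the standard starting point many regular-language learners share: building a prefix-tree acceptor from the example strings. This is not either of the two algorithms from the talk, just a minimal, hypothetical sketch of obtaining a finite-state automaton from a set of words.

```python
# Minimal sketch: build a prefix-tree acceptor (a trie-shaped finite-state
# automaton) from example words. Real word-structure learners go further,
# merging states to generalise; this shows only the common starting point.

def build_prefix_tree(words):
    """Return (transitions, accepting) for a trie-shaped automaton."""
    transitions = {}          # (state, symbol) -> state
    accepting = set()
    next_state = 1            # state 0 is the start state
    for word in words:
        state = 0
        for ch in word:
            if (state, ch) not in transitions:
                transitions[(state, ch)] = next_state
                next_state += 1
            state = transitions[(state, ch)]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, word):
    state = 0
    for ch in word:
        if (state, ch) not in transitions:
            return False
        state = transitions[(state, ch)]
    return state in accepting

trans, acc = build_prefix_tree(["walk", "walks", "walked", "talk", "talks"])
print(accepts(trans, acc, "walks"))   # True: seen during "training"
print(accepts(trans, acc, "walkss"))  # False: the raw acceptor does not generalise
```

State merging (e.g. merging the states reached after "walk" and "talk") is what turns this exact acceptor into a grammar that generalises to unseen words.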
Prof. Christine Orengo|
Comparative Genomics to Predict Protein Functions (Slides PDF, 6.5MB)|
ABSTRACT: There are now more than 500 completed genomes for organisms from all kingdoms of life. Information on the sequences and structures from the genomes can be mined to understand how function evolves in protein families and to develop new methods for predicting functional networks.(A45)
Prof. David Liben-Nowell|
Geographic Routing in Social Networks (Slides PDF, 3.8MB) Engineering Colloquium, 4pm, PLT|
ABSTRACT: If you had to send an urgent personal message to Phill Jupitus by passing it from one friend to another, how would you do it? Beginning with the "six degrees of separation" experiments of Stanley Milgram in the 1960s, interest in the structure and analysis of social networks has blossomed into a full-fledged area of research. More specifically, many interesting studies of our "small world" -- so called because two arbitrary people are likely to be connected by a short chain of intermediate friends -- have been carried out over recent years. In this talk, I will highlight the last forty years of research into small-world phenomena, and then describe a new model for friendship formation in social networks that is based upon the geographic locations of the people within it. I'll show that the model accurately predicts about two-thirds of self-reported friendships in the LiveJournal online blogging community. Furthermore, I'll describe theoretical results showing that every social network adhering to this model will be a small world. Time permitting, I will also mention applications to American neoconservativism, Manchester United, and the people who live in North Cornwall. Portions of this talk represent joint work with Ravi Kumar, Jasmine Novak, Prabhakar Raghavan, and Andrew Tomkins (Yahoo! Research Labs); and David Barbella, Anna Sallstrom (Carleton College), George Kachergis (Indiana), and Ben Sowell (Cornell). (A38)
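The forwarding rule behind such experiments can be sketched as greedy geographic routing: each person passes the message to the friend who lives closest to the target. The network and locations below are invented toy data, not the LiveJournal set.

```python
# Hedged sketch of greedy geographic forwarding, the routing rule studied
# in small-world experiments: forward to the friend nearest the target.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(friends, location, source, target, max_hops=20):
    """Forward greedily by geographic distance; return the chain of people."""
    chain = [source]
    current = source
    for _ in range(max_hops):
        if current == target:
            return chain
        # choose the friend who lives closest to the target
        best = min(friends[current],
                   key=lambda f: dist(location[f], location[target]))
        if dist(location[best], location[target]) >= dist(location[current], location[target]):
            return chain  # no friend is closer: greedy delivery fails
        chain.append(best)
        current = best
    return chain

location = {"Ann": (0, 0), "Bob": (1, 0), "Cat": (2, 0), "Dave": (3, 0)}
friends = {"Ann": ["Bob"], "Bob": ["Ann", "Cat"],
           "Cat": ["Bob", "Dave"], "Dave": ["Cat"]}
print(greedy_route(friends, location, "Ann", "Dave"))  # ['Ann', 'Bob', 'Cat', 'Dave']
```

The theoretical results mentioned in the talk concern when such greedy chains are short (logarithmic in the population) for geographically grounded friendship models.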
Dr. Kerstin Eder||
Declarative Discovery of Complex Design Behaviour: A Radically new Approach to Coverage-Directed Test Generation (Please eMail Speaker for Slides) |
ABSTRACT: [...] This talk introduces the basic principles of state-of-the-art simulation-based design verification and outlines an application of declarative machine learning techniques to discover complex internal design behaviour. The learning results are presented in a formal logic which can be turned into human-readable explanations. These explanations can help engineers comprehend the internal functions of the design and thus see which tests to run to exercise specific functional behaviour. In addition, the learning output provides a basis for automatic test generation. (A17)
Dr. Xianghua Xie||
Active Contouring (Slides PDF 2.8MB)|
ABSTRACT: Deformable contour models, or snakes, are commonly used in image processing and computer vision due to their natural handling of shape variation and independence of operation (once initialised). Although significant improvements have been made in this field over the last decade, there are still great challenges in handling weak edges, image noise, and the complex topologies and geometries which often occur in real world images. In this talk, we will examine the three fundamental issues in designing an active contour model, namely contour representation and its numerical solution, object boundary description and stopping function design, and initialisation and convergence. The speaker will show how the performance of active contour models, such as handling weak edges and improving initialisation invariance, can be enhanced through the design of novel external force fields. A novel contour representation method which transforms a difficult PDE problem into a simpler ODE problem will also be presented. More sophisticated topological changes are now readily achievable. A couple of applications of active contours to medical image analysis will also be briefly discussed. (A18)
Prof. Tadashige Ikeda|
Constitutive Model of Shape Memory Alloys and Application Engineering Colloquium, 4pm, QB1.15|
ABSTRACT: The presentation will start by introducing a simple yet accurate macroscopic constitutive model of shape memory alloys (SMAs). The features of this model are (1) an energy-based phase transformation criterion, (2) a one-dimensional phase transformation rule based on a micromechanical viewpoint, (3) dissipated energy in the form of a sum of two exponential functions, (4) reproduction of the strain rate effect, and (5) adaptability to multi-phase transformation. It is shown that this model can easily yet accurately reproduce the stress-strain relationship, including minor hysteresis loops and the strain rate effect, of not only a wire but also a bar and a tube. The second part of the presentation will be devoted to introducing a smart vortex generator (SVG) concept as an example of the application of SMAs to aircraft. The SVG autonomously transforms between an upright vortex-generating position in take-off and landing and a flat drag-reducing position during cruise, driven by the change in ambient temperature. Using the constitutive model introduced earlier and a conceptual demonstration model, it will be shown that the SVG can work.
Prof. Jeff Moehlis|
University of California
Coarse-Grained Analysis of Collective Motion |
ABSTRACT: Individual-based models, which can incorporate very detailed descriptions of individual organisms' dynamics and interactions, are a useful tool for understanding collective behavior in biological systems; however, such a model's complexity can make it computationally and analytically intractable for many applications. Analysis can often be more readily performed for population-level models; however, it may not be possible to derive such a model for a given individual-based model. Recently, the "equation-free" computational framework has been proposed as a way to analyze population-level dynamics associated with an individual-based model, even when the population-level equations are not known explicitly. The idea is to use intelligently-initialized short integrations of the individual-based model to estimate necessary population-level quantities on demand. In this talk, we apply the equation-free framework to understand the population-level behavior of a model for schooling fish. In particular, we focus on a case for which the model can give co-existing stable stationary and mobile collective behaviors. Stochastic effects cause the school to switch between these behaviors, leading to stick-slip dynamics which can be characterized using an effective potential in terms of a population-level coarse variable. The effective potentials found using equation-free techniques compare very favorably with those obtained (with much more computational effort) from long-time simulations. (A34)
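The equation-free idea can be sketched in a few lines: lift a coarse value to a microscopic state, run the individual-based model for a short burst, restrict back, and estimate the coarse drift from the difference. The individual-based model below is an invented toy (individuals relaxing toward a preferred value), not the fish-schooling model from the talk.

```python
# Toy illustration of the "equation-free" framework: estimate population-level
# drift from short bursts of an individual-based model, without deriving the
# population equation analytically. The micro-model here is invented.

import random

def micro_step(xs, preferred=1.0, dt=0.01):
    """One step of the individual-based model: relax toward `preferred`, plus noise."""
    return [x + dt * (preferred - x) + 0.01 * random.gauss(0, 1) for x in xs]

def lift(coarse, n=200):
    """Lift a coarse value to a consistent microscopic state (a population)."""
    return [coarse + 0.05 * random.gauss(0, 1) for _ in range(n)]

def restrict(xs):
    """Restrict a microscopic state to the coarse variable (the population mean)."""
    return sum(xs) / len(xs)

def estimate_drift(coarse, burst_steps=50, dt=0.01):
    """Short-burst estimate of d(coarse)/dt at the given coarse value."""
    xs = lift(coarse)
    start = restrict(xs)
    for _ in range(burst_steps):
        xs = micro_step(xs, dt=dt)
    return (restrict(xs) - start) / (burst_steps * dt)

random.seed(0)
# The coarse drift should point toward the preferred value 1.0:
print(estimate_drift(0.0) > 0)   # positive drift below the fixed point
print(estimate_drift(2.0) < 0)   # negative drift above it
```

From such on-demand drift estimates one can build an effective potential in the coarse variable, which is how the stick-slip switching in the talk is characterised.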
Spectus3D: Interactive Volume Rendering (Slides PDF 1.2MB)|
ABSTRACT: Modern medical scanning technologies produce large quantities of inherently three-dimensional data which are often viewed as two-dimensional slices. Volumetric rendering can show these data sets as a whole, allowing structures in the body to be seen as they really exist. However, volume methods have traditionally been very computationally expensive. This talk will present a method of rendering volumetric data using texture-mapped, intersecting, orthogonal slices aligned with the volume. By using OpenGL through Java and Java 3D, very fast rendering can be achieved, allowing user interaction with the volume, while sidestepping many of the drawbacks of traditional methods. A simple white matter brain extraction algorithm will also be discussed, showing the use of pseudo-colour to represent certain tissue types over a wider colour range than that at which they are usually viewed. A live demo of the program in action will be given. Some screenshots and samples can be found here (external link). (A19)
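At the heart of the texture-slice technique is back-to-front alpha compositing of the slices. The toy sketch below performs that compositing for a single pixel in plain Python; the numbers are invented, and in the real system this work is done per fragment by the graphics hardware.

```python
# Back-to-front "over" compositing of a stack of (colour, alpha) slice
# samples for one pixel -- the core operation behind texture-slice volume
# rendering. Scalar grey values are used here instead of RGB for brevity.

def composite(slices):
    """Composite (colour, alpha) samples ordered back to front."""
    colour = 0.0
    for c, a in slices:
        colour = c * a + colour * (1.0 - a)
    return colour

# An opaque white slice in front hides everything behind it:
print(composite([(0.2, 1.0), (1.0, 1.0)]))  # 1.0
# A semi-transparent front slice blends with what is behind:
print(composite([(0.0, 1.0), (1.0, 0.5)]))  # 0.5
```

The transfer function mentioned in the talk (pseudo-colour for tissue types) amounts to choosing the (colour, alpha) pair assigned to each voxel intensity before this compositing step.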
Dr. Simon Hollis||
On-chip Communication to Support Highly Parallel Systems (Slides PDF, 1.7MB)|
ABSTRACT: Increases in integrated circuit transistor density have enabled the creation of highly complex on-chip computational circuitry. The law of diminishing returns has forced designers to migrate from conventional, highly optimised and single-threaded processor designs to systems incorporating many copies of much simpler processing elements. This paradigm shift has brought about a change in thinking about the various trade-offs available when designing a chip to support an application. At the hardware level, the biggest such shift has been driven by the change from the "Wires are free" methodology, to one realising that the most critical factor in the performance of a modern system is the cost in both power and latency in simply moving data around. This seminar will introduce the problems of interconnecting tens to thousands of processing cores, and how a migration to Networks-on-Chip (NoCs) allows the creation of ever-more complex systems. Different styles of NoC will be reviewed, and a design for an area-efficient implementation will be introduced. Finally, some future directions will be suggested. (A24)
Carl H Ek and Dr. Neil Lawrence|
(Opening of the Season 07/08)
Shared Gaussian Process Latent Variable Models (Slides PDF, 3.7MB)|
ABSTRACT: A common situation in Machine Learning is to model relationships between multiple observations of an underlying phenomenon. Often we are interested in answering questions in one representation space given a corresponding observation in another. This can be done by modeling the conditional probability of the desired output space given the observed input space. For many types of data this distribution can be very complex, due to high dimensionality and a limited amount of training data. If the observations have a functional relationship, it can be captured by a regression model; in practice, however, assuming a functional relationship is often too simplistic and constrained an assumption. In this talk we will present a model which, based on the assumption that the observed data have been sampled from a low-dimensional manifold, decomposes the observed data spaces into a set of low-dimensional latent representations. The full latent representation represents the full variance of both data sets, with a subspace representing the variance shared between the observations. We have developed a convex algorithm for finding the decomposition and a probabilistic model based on Gaussian Processes for performing validation and inference over the model. We will show how this model can be used to perform multi-valued regression on a human pose estimation from silhouettes task. (A48)
Prof. Min Chen|
(Preseasonal Invited Talk)
The Complexity of Video Visualization. |
ABSTRACT: Video visualization is a computation process that extracts meaningful information from original video data sets. Such visualizations can convey much more information [...] than a few statistical indicators. With carefully prepared visualizations, the human vision system, perhaps the most intelligent vision system, is able to become accustomed to certain kinds of visual patterns, and to react to unusual levels or patterns of activity that need further investigation. In this talk, the speaker will give an overview of the state of the art of video processing and video visualization, and compare the general complexity of a computation pipeline for video processing with that for video visualization. The speaker will also briefly present recent work, carried out jointly by Swansea and Stuttgart. He will describe the use of a combination of volume and flow techniques for video visualization, a user study to examine whether users are able to learn to recognize visual signatures of motions and to assist in the evaluation of different visualization techniques, and the application of the developed techniques to a set of application video clips. This work has demonstrated that video visualization is both technically feasible and cost-effective. It provided the first set of evidence confirming that ordinary users can become accustomed to the visual features depicted in video visualizations, and can learn to recognize visual signatures of a variety of motion events.
|June 28||Ben Daubney||
Using Low Level Motion to Estimate Human Pose|
ABSTRACT: There is currently much interest in the computer vision community in estimating human pose from a monocular camera. Despite this interest, the problem remains unsolved, largely due to the many degrees of freedom the human body contains. Whilst current approaches to this problem depend on modelling appearance, we present a system that relies entirely on motion. In this seminar we will demonstrate that motion can be used all the way from low-level recognition to high-level pose extraction of humans performing gait motions.
|June 28||Prof. David May||
Communicating Process Architecture for Multicores (Slides PDF, 64kB) |
ABSTRACT: Communicating process architecture can be used to build efficient multicore chips scaling to hundreds of processors. Concurrent processing, communications and input-output are supported directly by the instruction set of the cores and by the protocol used in the on-chip interconnect. Concurrent programs are compiled directly to the chip exploiting novel compiler optimisations. The architecture supports a variety of programming techniques, ranging from statically configured process networks to dynamic reconfiguration and mobile processes.
Dr. Artur Loza|
Statistical Model-based Fusion of Noisy Images (Slides PDF, 5.1MB) |
ABSTRACT: In this talk, a new method for multimodal image fusion, based on statistical modelling of wavelet coefficients, will be presented. The algorithm incorporates Laplacian bivariate parent-child statistical dependencies into a popular image fusion technique called Weighted Average. The interscale dependency is incorporated in the form of shrinkage functions. The proposed method has been shown to perform very well with noisy datasets, outperforming other conventional methods in terms of fusion quality and noise reduction in the fused output. (A19)
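The general transform-combine-invert pattern such fusion methods build on can be sketched on 1-D signals. This is only a simple baseline (average the approximations, keep the larger-magnitude detail coefficient), not the talk's Weighted Average scheme with bivariate Laplacian priors.

```python
# Minimal wavelet-domain fusion sketch on 1-D signals: one level of the
# Haar transform, a simple coefficient combination rule, then the inverse.

def haar_forward(x):
    """One level of the Haar wavelet transform (len(x) must be even)."""
    approx = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    x = []
    for a, d in zip(approx, detail):
        x += [a + d, a - d]
    return x

def fuse(x, y):
    """Average approximations; keep the larger-magnitude detail coefficient."""
    ax, dx = haar_forward(x)
    ay, dy = haar_forward(y)
    approx = [(a + b) / 2 for a, b in zip(ax, ay)]
    detail = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return haar_inverse(approx, detail)

# Two "sensors" observing the same step edge, one with the edge attenuated:
a = [0, 0, 0, 0, 10, 10, 10, 10]
b = [0, 0, 0, 0, 4, 4, 4, 4]
print(fuse(a, b))  # [0, 0, 0, 0, 7, 7, 7, 7]
```

The statistical shrinkage functions in the talk replace the crude choose-max/average rules above with weights derived from the joint parent-child coefficient model, which is what gives the noise-reduction behaviour.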
Perceptually Optimised Video Coding based on Eye-tracking Analysis (Slides PDF, 2.4MB) |
ABSTRACT: In a world in which video is becoming more prolific and accessible, new and improved video codecs are needed to cope with the added demands of high definition and limited bandwidth. Current video standards work on the signal level and take no account of the way in which the video is being viewed. Previous work has shown that humans exhibit common behaviour when presented with video. We look at how this can be exploited and present a video coding paradigm which allows perceptual based coding. This model is then demonstrated for two specific video contexts. (A29)
Dr. Anthony Steed|
University College London
Me, My Avatar and I (Slides PDF, 2.8MB) |
ABSTRACT: As long as there have been games, there has been social interaction around games: betting, kibitzing and boasting. With the current generation of on-line interactive environments and video games consoles, we are increasingly seeing this social interaction being supported in collaborative virtual environments; that is, 3D virtual environments where each participant is given their own 3D representation - an avatar.
Avatars have been a topic of academic research for many years. People often react to avatars as if they were real people. This reaction can be quite comprehensive in that we can see reaction ranging from social conformance to fear. In this talk, I will describe both recent academic work in this area, and some observations based on experience working in the games industry. Our own work at UCL suggests that visual realism of avatars is not as important as behavioural realism. The games industry is now realising that the graphics quality has reached a new plateau, and that the avatars they are building need a wider range of behavioural responses. (A25)
|May 10||Prof. Peter Flach||
Putting Things in Order: Learning to Rank (Slides PDF, 2.9MB) |
ABSTRACT: Classification is a task familiar to many computer scientists. For instance, we may want to be able to classify an email as spam or ham, an image as an outdoor scene or not, or a DNA sequence as containing a particular gene or not. There are numerous techniques to learn such a classifier from labelled training data. Often, such techniques proceed by training a model to output scores which can be thresholded to obtain a classifier. Without a threshold, such a model is a ranker: it puts instances in order, from the highest to the lowest expectation that they belong to the positive class. This raises a number of questions, such as: Is the training method for rankers different from that for classifiers? Are the scores output by rankers meaningful? Can we do ranking without calculating scores? In this talk I will address these questions with the help of ROC analysis. (A38)
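The link between ranking and ROC analysis can be made concrete: the area under the ROC curve equals the probability that a randomly drawn positive is ranked above a randomly drawn negative, so AUC scores a ranker directly, without ever fixing a threshold. A minimal sketch:

```python
# AUC computed via its pairwise-ranking interpretation: the fraction of
# (positive, negative) pairs the ranker orders correctly (ties count half).

def auc(scores, labels):
    """Scores are ranker outputs; labels are 1 (positive) or 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A ranker that orders all positives above all negatives has AUC 1.0:
print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
# One ranking inversion lowers it:
print(auc([0.9, 0.2, 0.3, 0.1], [1, 1, 0, 0]))  # 0.75
```

Note that only the ordering of the scores matters here, which is one reason the talk can ask whether rankers' scores need to be meaningful at all.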
Dr. Guido Herrmann|
Prevention of Controller Windup - a Framework for Linear, Nonlinear and Adaptive Control Schemes Colloquium, 4pm, PLT, QB |
ABSTRACT: [...] In this seminar, anti-windup (AW) compensation schemes will be presented which are initially developed on the basis of a linear control loop with actuator limitations. The principal idea is to incorporate into the AW scheme a copy of the plant model augmented by a parameter filter. This makes it possible to decouple the non-linear characteristics due to actuator limitations from the original control loop. This leads to significant transparency of the AW compensation approach and to a convex design scheme. Principles of this AW approach are easily extended to non-linear control schemes such as non-linear dynamic inversion control and non-linear adaptive neural network control. Practical examples (e.g. hard disk drive control) show the feasibility of the AW compensation approach.
Dr. Vittorio Ferrari|
University of Oxford
Accurate Object Detection with Deformable Shape Models Learnt from Images (Slides PDF, 5.9MB)|
ABSTRACT: In this talk we present an object class detection approach which fully integrates the complementary strengths offered by object detectors and shape matchers. Like an object detector, it can learn class models directly from images, and localize novel instances in the presence of intra-class variations, clutter, and scale changes. Like a shape matcher, it finds the accurate boundaries of the objects, rather than just their bounding-boxes. This is made possible with a novel technique for learning both the prototypical shape of an object class and a statistical model of how it can deform, given just images of example instances. Once the model is learnt, we localize novel instances in cluttered images by combining a Hough-style voting process with a non-rigid point matcher. Through experimental evaluation, we show how the method can detect objects and localize their boundaries accurately, while needing no segmented training examples (only bounding-boxes).
|March 29||Dr. Elisabeth Oswald||
Power Analysis Attacks (Slides PDF, 651kB) |
ABSTRACT: In this seminar I will talk about security issues for cryptographic devices such as smart cards. I will sketch how such devices can be attacked in general and how power analysis attacks work in particular. I will also give a brief overview on countermeasures and their effectiveness.
|March 22||Dr. Janko Calic||
From Film to Comics ... and Back (Slides PDF, 6.8MB) |
ABSTRACT: This talk will outline an approach to automatic creation of comics from videos and how it can improve the video production chain. The main focus of the talk is efficient summarisation of large scale video collections. In order to represent large amounts of information in the form of a video key-frame summary, this work studies narrative grammar of comics, and using its universal and intuitive rules, lays out visual summaries in an efficient and user centered way. The system ranks importance of key-frame sizes in the final layout by balancing the dominant visual representability and discovery of unanticipated content utilising a specific cost function and an unsupervised robust spectral clustering technique. A final layout is created using a discrete optimisation algorithm based on dynamic programming.
|March 15||Dr. Julian Gough||
Protein Sub-classification (Slides PDF, 503kB) |
ABSTRACT: Proteins are encoded by a sequence of amino acids with an alphabet of 20. Over time as these proteins evolve they undergo a series of single point mutations. These mutations accumulate, and eventually the same protein in two different species may have completely different amino acid sequences. Hidden Markov models are relatively successful in recognising distant evolutionary relationships in sequences which apparently have nothing in common. However, proteins which are evolutionarily related might have different functions (due to Darwinian selection operating on the results of the accumulating mutations). I will present an algorithm for sub-classifying protein sequences for which hidden Markov models have detected an evolutionary relationship, into families which share a common or related function.
|March 08||Dr. Henk Muller||
A 10mW Wearable Positioning System (Slides PDF, 55kB) |
ABSTRACT: In this talk we present an approach to designing a low power location system. The system consists of an infrastructure of ultrasonic transmitting devices and a receiver device on the wearable. The receiver comprises an ultrasonic pick-up, an op-amp and a PIC. The PIC implements a particle filter for estimating X and Y positions, which is robust in an environment with limited precision. We have modified the traditional particle filter in order to operate on a micro-controller with limited memory capability. (A50)
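A minimal particle filter of the kind described, estimating an (X, Y) position from noisy ranges to fixed beacons, can be sketched as below. The beacon layout and noise levels are invented, and the memory-saving modifications for the PIC are not reproduced.

```python
# Sketch of a particle filter for (X, Y) position estimation from noisy
# range measurements to fixed ultrasonic beacons. Toy parameters throughout.

import math, random

BEACONS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]

def likelihood(particle, ranges, sigma=0.5):
    """How well a particle explains the measured beacon ranges."""
    w = 1.0
    for (bx, by), r in zip(BEACONS, ranges):
        predicted = math.hypot(particle[0] - bx, particle[1] - by)
        w *= math.exp(-((predicted - r) ** 2) / (2 * sigma ** 2))
    return w

def particle_filter_step(particles, ranges):
    """Weight particles by measurement likelihood, then resample with jitter."""
    weights = [likelihood(p, ranges) for p in particles]
    if sum(weights) == 0.0:
        weights = None  # degenerate case: resample uniformly
    resampled = random.choices(particles, weights=weights, k=len(particles))
    return [(x + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
            for x, y in resampled]

random.seed(1)
true_pos = (3.0, 4.0)
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
for _ in range(10):
    ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) + random.gauss(0, 0.2)
              for bx, by in BEACONS]
    particles = particle_filter_step(particles, ranges)
est_x = sum(p[0] for p in particles) / len(particles)
est_y = sum(p[1] for p in particles) / len(particles)
print(round(est_x, 1), round(est_y, 1))  # close to (3.0, 4.0)
```

On a micro-controller the interesting engineering is in exactly the parts this sketch glosses over: the number of particles, fixed-point arithmetic, and the memory cost of resampling.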
Dr. Joao Marques-Silva|
University of Southampton
Model Checking with Boolean Satisfiability (Slides PDF, 400kB) |
ABSTRACT: The evolution of SAT algorithms over the last decade has motivated their application to model checking, initially through the utilization of SAT in bounded model checking and, more recently, in unbounded model checking. Among the techniques proposed for unbounded model checking, the utilization of interpolants entails significant advantages that motivate its practical usage. This talk provides an overview of SAT, SAT-based bounded model checking and the utilization of interpolants in unbounded model checking. Moreover, improvements to the original interpolant-based unbounded model checking algorithm are described. (A19)
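The unrolling at the core of bounded model checking can be illustrated on a toy transition system. Real BMC encodes the unrolled path as a propositional formula handed to a SAT solver; the sketch below brute-forces the same bounded search, since only the unrolling is being illustrated.

```python
# Toy bounded model checking: search for a path of length <= k from an
# initial state to a bad state. A SAT solver would explore the same
# unrolled search space symbolically; here it is enumerated explicitly.

from itertools import product

def bmc(init, transition, bad, k, states):
    """Return a counterexample path of length <= k, or None if none exists."""
    for depth in range(k + 1):
        for path in product(states, repeat=depth + 1):
            if (init(path[0])
                    and all(transition(path[i], path[i + 1]) for i in range(depth))
                    and bad(path[-1])):
                return list(path)
    return None

# 2-bit counter: 0 -> 1 -> 2 -> 3 -> 0; property: "counter never equals 3".
states = [0, 1, 2, 3]
cex = bmc(init=lambda s: s == 0,
          transition=lambda s, t: t == (s + 1) % 4,
          bad=lambda s: s == 3,
          k=3, states=states)
print(cex)  # [0, 1, 2, 3]: the property fails at depth 3
```

Interpolants, the talk's main topic, are what let unbounded model checking conclude a property holds for all depths without unrolling forever.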
A Model for How People May Identify Objects (Slides PDF, 12kB)|
ABSTRACT: [...] This talk will explore one possible way in which people identify objects from the raw sensory data that serves as input to the various perceptual systems that humans have. Following on from a discussion of the cognitive model some of its implications for the design of software will be explored. Whilst a lot of the implications are centered around user interfaces and human computer interaction the model also suggests ways in which improvements in networking performance and the rendering of visual, auditory, and haptic data could be obtained. (A22)
Dr. John Chiverton||
Probabilistic Partial Volume Modelling of Biomedical Tomographic Image Data |
ABSTRACT: [...] Three-dimensional volumetric data points (voxels) often enclose finite sized regions so that they may contain a mixture of signals which are then known as partial volume voxels. [...] Clinical applications of biomedical imaging data often require accurate estimates of tissues or metabolic activity, where many voxels in the data are partial volume voxels. [...] The probabilistic models discussed and presented in this presentation provide a generic mathematically consistent framework in which the partial volume effect is modelled. Novel developments include an improved model of an intensity and gradient magnitude feature space to model the PV effect. [...] (A31)
Prof. Bernhard Ganter|
Dresden University of Technology
Data without Numbers: Computing with Formal Concepts (Slides PDF, 1.2MB) Colloquium, 4pm, 1.15QB |
ABSTRACT: Formal Concept Analysis is a mathematical theory that offers a formalisation of 'concept' and 'conceptual hierarchy'. It provides a rich methodology based on the algebraic theory of complete lattices. [...] In particular its data visualisations are highly intuitive. [...] The author focuses on questions of knowledge exploration. A key issue of this talk is how to design a compact, correct and complete representation of certain, well limited areas of knowledge. The methods can be widely generalised to structurally fit a multitude of applications. [...] (Complete Abstract and Bio, PDF 20kB) (A53)
Dr. Carl Seger|
Intel CAD Labs
Integrating Design and Verification - From Simple Idea to Practical System (Slides coming soon) |
ABSTRACT: [...] In the presentation, we introduce the Integrated Design and Verification (IDV) system that has been developed at Intel for the last 5 years. IDV combines the design and validation efforts so that the task of design validation (i.e., "Did we capture what we actually wanted?") is significantly simplified by means of a much smaller and much more stable high-level model. [...] (A48)
|2006||December 14||Denis Chekhlov||
Robust Real-time Visual Monocular SLAM (Slides PDF, 500kB) |
ABSTRACT: Real-time visual Simultaneous Localisation and Mapping (SLAM) with a single camera is a challenging and important problem. Recently a need has arisen not just to make SLAM fast, but also robust in order to survive erratic motion and occlusions, which is important in real applications. In the first part of the talk there will be a quick introduction to visual SLAM using a stochastic filter. The second part will concentrate on robust data association (feature matching) for real-time SLAM systems. [...] (A25)
|December 07||Prof. Dhiraj Pradhan||
Application of Galois Fields to Verification and Synthesis |
ABSTRACT: This talk consists of two main parts. Firstly, there will be a brief introduction to Galois switching theory followed by applications to verification and synthesis.
Prof. Michael Leyton|
A Generative Theory of Shape Colloquium, 3pm, 1.18QB |
ABSTRACT: This talk gives an introduction to `A Generative Theory of Shape'. The purpose of the talk is to develop a generative theory that has two properties regarded as fundamental to intelligence - maximizing transfer of structure and maximizing recoverability of the generative operations. These two properties are particularly important in the representation of complex shape. [...] (A25)
Dr. Eerke Boiten|
University of Kent
Quantifying Compromises in Correctness (Slides PDF, 193kB) |
ABSTRACT: Correctness in computer science is normally an absolute: either a system satisfies its specification, or it doesn't. This does not account for the fact that "real" programs are viewed as "mostly" correct, which often actually seems to be useful and acceptable. This talk will argue that such views can be given formal underpinnings (sometimes!) - by quantifying the degree of correctness and identifying parameters that control this. [...] (A26)
Prof. Tom Melham|
University of Oxford
Practical Formal Verification at an Industrial Scale (Slides PDF, 342kB)|
ABSTRACT: Formal hardware verification aims to improve the quality of hardware designs using logical reasoning supported by software tools. In this talk, Prof. Melham describes some collaborative work with researchers at Intel's Strategic CAD Labs that aims to make formal verification a practical, everyday tool for industrial hardware design -- specifically high-performance microprocessor design. [...] (A42)
Prof. Andrew Brown|
University of Southampton
(Preseasonal Invited Talk)
Electronic Design Automation - a perspective |
Prof. David Hogg|
University of Leeds
Unsupervised Learning in Computer Vision |
ABSTRACT: Learning is now centre-stage in studies of computer vision. The talk will explore prospects for completely unsupervised learning, including the role of embodiment and a review of recent progress on object and activity recognition. The main part of the talk will examine the role of conceptual reasoning in steering learning, including reasoning about the intentionality of visible agents.
|February 16||Will Pearson||
What Do I Mean? - Using Semantics to Improve the User Experience |
ABSTRACT: As someone once said, 'You don't want to be looking at a screen whilst walking along, you might fall down the steps!' This talk will explore the relationship between physical signs and conceptual knowledge, how these relationships are formed and how they can be utilised in user interfaces to generate a better experience for the user across a range of contexts and devices. Different interaction modalities, such as haptics and audio, will be briefly presented and their application in presenting the same conceptual knowledge in a variety of different physical forms discussed.
|February 09||Dr. Rafal Bogacz||
The Brain Implements Optimal Decision Making Between Alternative Actions |
ABSTRACT: Recent experimental studies have identified a number of brain regions critically involved in solving the problem of 'action selection' or 'decision making'. These regions include cortical areas hypothesised to integrate evidence supporting alternative actions, and a set of nuclei, the basal ganglia, hypothesised to act as a central 'switch' in gating behavioural requests. However, despite our relatively detailed knowledge of basal ganglia biology and its connectivity with the cortex, no formal theoretical framework exists that supplies an algorithmic description of these circuits, and that can explain why they are organized in the specific way they are. This talk addresses this question by showing how many aspects of the anatomy and physiology of the circuit involving the cortex and basal ganglia are exactly those required to implement the computation defined by an asymptotically optimal statistical test for decision making - the Multiple Sequential Probability Ratio Test (MSPRT). The resulting model of the basal ganglia provides a rationale for their inter-nucleus connectivity and the idiosyncratic properties of particular neuronal populations.
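The decision rule itself can be sketched compactly: the MSPRT accumulates log-likelihood evidence for each alternative and commits as soon as one alternative's posterior probability crosses a threshold. The Gaussian observation model and the data stream below are invented for illustration.

```python
# Sketch of the Multiple Sequential Probability Ratio Test (MSPRT):
# accumulate log-likelihoods per alternative; stop when one posterior
# exceeds the decision threshold.

import math

def msprt(observations, loglik, n_alt, threshold=0.99):
    """Return (chosen alternative, samples used), or (None, samples) if undecided."""
    y = [0.0] * n_alt
    step = 0
    for step, x in enumerate(observations, start=1):
        y = [yi + loglik(x, i) for i, yi in enumerate(y)]
        m = max(y)
        post = [math.exp(v - m) for v in y]
        total = sum(post)
        for i, p in enumerate(post):
            if p / total >= threshold:
                return i, step
    return None, step

# Toy observation model: alternative H_i predicts Gaussians with mean 2*i.
def loglik(x, i):
    return -((x - 2.0 * i) ** 2) / 2.0

data = [3.6, 4.4, 3.9, 4.1, 4.3, 3.8]  # noisy samples around 4, i.e. H_2
choice, steps = msprt(data, loglik, n_alt=3)
print(choice, steps)  # 2 3: H_2 chosen after three samples
```

The talk's claim is that the cortex-basal ganglia circuit computes something equivalent to the normalised posterior comparison in the inner loop above.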
|February 02||Dr. Erik Reinhard||
Image-Based Material Editing |
ABSTRACT: Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a method for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image as input. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies, while being sensitive to others. By adjusting our simulations to be careful about those aspects to which the human visual system is sensitive, we are for the first time able to demonstrate significant material changes on the basis of a single photograph as input.
Prof. William Clocksin|
Oxford Brookes University
How I Solved the GCHQ Codebreaking Challenge |
ABSTRACT: As a public relations exercise, GCHQ (Government Communications Headquarters) occasionally presents codebreaking challenges on its website. These challenges are not state-of-the-art cryptography, but are just a bit of fun designed to appeal to amateur codebreakers and puzzle enthusiasts. The challenge issued just before Christmas 2004 was more difficult than usual, and had some interesting features such as very short message lengths. In this seminar I'll describe how I solved the challenge and developed some tools in Prolog to assist with finding the solution.
Prof. David Lowe|
University of British Columbia
Invariant Feature Matching for Image Panoramas and Augmented Reality |
ABSTRACT: This talk will present an overview of some recent applications of invariant local feature matching to problems in computer vision. A brief summary of the SIFT approach to image matching will be presented. Then, the problem of detecting image panoramas from unordered sets of images will be defined. Our method for panorama detection can identify small overlapping image regions within large image sets in time that is linear in the number of images. We use robust bundle adjustment and multiband blending to generate seamless panoramas. In addition, recent work on augmented reality will be described. This solves the problem of 3D structure and camera location from multiple views to generate accurate 3D models. Real-time recognition and subpixel pose determination are used to insert synthetic objects into a scene. A novel approach will be presented for minimizing jitter in augmented reality pose solutions.
|January 12||Tilo Burghardt||
Automatic Visual Identification of Patterned Organisms |
ABSTRACT: An accurate mapping from images of a uniquely patterned but deformable object to the probability of its identity within a group can enable a visual system to individually identify uniquely patterned organisms. In this talk I will outline a new approach that aims to solve this problem by exploiting the unique patterning of organisms as well as visibility and deformation statistics of the underlying deformable 3D object. Based on an initially sparse set of robustly extracted key-points, the system recovers the object pose by employing a model-based and truly 3-dimensional pose recovery technique [...] Once a hypothesis for deformation and pose is found, the visible surface pattern of the object can be extracted and compared to a known set of patterns. Finally, a statistical analysis of the matched patterns yields a probability of the object identity within a group. [...]
|2005||December 15||Richard Gillibrand||
Cost Prediction for High Fidelity Global Illumination Rendering |
ABSTRACT: [...] This seminar presents a comparison of different pixel profiling strategies which may be used to predict the overall rendering cost of a high fidelity global illumination solution. A technique based on a fast rasterised scene preview is described which provides a more accurate positioning and weighting of samples, to achieve accurate cost prediction.
|December 08||Dr. Nigel Smart||
LASH: A Lattice Based Hash Function |
ABSTRACT: We present a practical cryptographic hash function for which the difficulty of finding collisions and preimages is related to hard problems in lattices. The hash function is based on ideas of Goldreich, Goldwasser and Halevi. We show that by suitable parameter choices we can produce a hash function which is comparable in performance to existing deployed hash functions such as SHA-1 and SHA-2.
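In the Goldreich-Goldwasser-Halevi line of work that LASH builds on, hashing is matrix-vector multiplication modulo q, so finding a collision yields a short vector in a related lattice. The toy Python sketch below illustrates that shape only: the parameters are far too small to be secure, and this is not LASH's actual construction.

```python
import random

def lattice_hash(A, bits, q):
    """Toy GGH/Ajtai-style compression: h(x) = A.x mod q for a binary
    input x. A collision x != y gives A.(x - y) = 0 mod q, i.e. a short
    vector in the kernel lattice, which is believed hard to find."""
    n, m = len(A), len(A[0])       # need m > n*log2(q) for compression
    assert len(bits) == m
    return tuple(sum(A[i][j] * bits[j] for j in range(m)) % q
                 for i in range(n))

# tiny illustrative parameters (insecure, for demonstration only)
random.seed(1)
q, n, m = 257, 4, 64
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
x = [random.randrange(2) for _ in range(m)]
digest = lattice_hash(A, x, q)
```

Here 64 input bits compress to four residues modulo 257 (about 32 bits), so collisions must exist; the security claim is that exhibiting one is as hard as the underlying lattice problem.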
|December 01||Rob Egginton||
Predicting Intelligent Behaviour with Probabilistic Temporal Logic |
ABSTRACT: There are many problems that might require the prediction of another intelligent agent's behaviour, whether that agent is a robot, human or ant. In order to reason about the state of an agent at some future time we need to know not only what choices they might make, but when they will make them and how this might change their state. In this talk I'll describe a possible solution method based on probabilistic temporal logic, and highlight some of the challenges in making predictions both tractable and generalisable to other agents and situations.
|November 24||Sion Hannuna||
Detecting Ambulatory Quadrupeds in Low Quality Wildlife Video |
ABSTRACT: This talk describes a novel approach to detecting walking quadrupeds in unedited wildlife film footage. Variable lighting, moving backgrounds and camouflaged animals make traditional foreground extraction techniques such as optical flow and background subtraction unstable. [...] Principal component analysis (PCA) is applied to this set of dense flows and eigenvectors not encapsulating periodic internal motion characteristics are disregarded. The projection coefficients for the remaining principal components are analysed as one dimensional time series. [...] These coefficients' relative phase differences are deduced using spectral analysis and degree of periodicity using dynamic time warping. These parameters are used to train a KNN classifier which segments the training data with 95% success rate. By generating projection coefficients for unseen footage, the system has successfully located examples of quadruped gait previously missed by human observers.
|November 17||Dr. Raphael Clifford||
Pattern Matching by Numbers |
ABSTRACT: Pattern matching algorithms have a long and successful history starting from the early days of linear time methods. In recent years, provably fast algorithms for approximate matching have become the backbone for entire disciplines, most notably the hugely successful field of computational genomics. We explore an entirely new avenue for combinatorial pattern matching where the data are numeric. We show that natural measures of the distance between two vectors both allow for faster solutions to classic problems and raise a great number of unanswered questions.
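One natural numeric distance is the squared L2 norm between the pattern and every alignment of the text. Expanding the square splits it into terms that fast algorithms compute via the FFT in O(n log n); the sketch below uses the same algebra with a naive correlation for clarity, and the example data are invented:

```python
def l2_distances(pattern, text):
    """Squared L2 distance of the pattern at every alignment:
    D[j] = sum_i (p[i] - t[j+i])^2
         = sum p^2 + sum of t^2 over the window - 2 * cross-correlation.
    The cross-correlation term is the one accelerated with the FFT in
    the fast algorithms; here it is computed naively in O(nm)."""
    m = len(pattern)
    p2 = sum(v * v for v in pattern)
    out = []
    for j in range(len(text) - m + 1):
        window = text[j:j + m]
        t2 = sum(v * v for v in window)
        corr = sum(p * t for p, t in zip(pattern, window))
        out.append(p2 + t2 - 2 * corr)
    return out

dists = l2_distances([1, 2], [3, 1, 2, 4])
# an exact occurrence of the pattern shows up as a distance of 0
```

A zero entry marks an exact match, and small entries mark approximate numeric matches, which is what makes this formulation useful beyond exact string matching.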
Dynamic Aspects Ltd
A Delta-Driven Execution Model for Semantic Computing |
ABSTRACT: We describe (and demonstrate) the execution model of a computing platform where computation is both incremental and data-driven. We call such an approach delta-driven. The platform is intended as a delivery vehicle for semantically integrated software, and thus lends itself to the semantic web, domain-driven development, and next-generation software development environments. Execution is transparent, versioned, and persistent. This technology - still at an early stage - is called domain/object.
|November 03||Paul Duff||
Auto-Calibrated Active Object Tracking |
ABSTRACT: Position sensing is an important aspect of many pervasive computing applications. It provides contextual information to applications which model relationships between people and their environment. Recent work in the field has focused on developing position sensing systems which are cheap, lightweight and easy to calibrate. In this talk I will present an ultrasonic-only system designed to fulfil these needs. I will explain how our new active tracking system works, and how it differs from our other passive positioning systems. In particular, I will focus on the motivation and algorithms behind auto-calibration and auto-orientation of the system. Finally, I will describe a couple of techniques for tracking one or more objects in a room.
|October 27||Dr. Aram Harrow||
Quantum Algorithms and Basis Changes |
ABSTRACT: The state of an n-bit quantum computer is described by a unit vector in a 2^n-dimensional complex vector space. This means that transformations are possible, such as a square root of NOT or a Fourier transform of the amplitudes of a state, that would not even make sense for classical probability distributions. Some of these transformations, like the quantum Fourier transform, allow for exponential speedups over classical computation. In my talk, I'll review what these transformations mean and what can be accomplished with them. Then I'll talk about work I've done on efficiently implementing an operation known as the Schur transform, which is based on a quantum analogue of the type classes used in classical information theory.
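Both transformations mentioned are simply unitary matrices acting on the amplitude vector. A small illustrative Python sketch (not material from the talk): applying the square root of NOT twice flips |0> to |1>, something no classical stochastic matrix can do, and the quantum Fourier transform is the unitary DFT of the 2^n amplitudes.

```python
import cmath

def apply_1q(gate, state):
    """Apply a 2x2 unitary to a single-qubit amplitude vector."""
    a, b = state
    return [gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b]

# square root of NOT: its square is the NOT (bit-flip) matrix
SQRT_NOT = [[(1 + 1j) / 2, (1 - 1j) / 2],
            [(1 - 1j) / 2, (1 + 1j) / 2]]
state = apply_1q(SQRT_NOT, apply_1q(SQRT_NOT, [1, 0]))  # ~ [0, 1]

def qft(amps):
    """Quantum Fourier transform = unitary DFT of the amplitude vector."""
    n = len(amps)
    return [sum(amps[k] * cmath.exp(2j * cmath.pi * j * k / n)
                for k in range(n)) / n ** 0.5
            for j in range(n)]
```

Note the intermediate state after one application of SQRT_NOT has equal probability of 0 and 1, yet the second application yields 1 with certainty: the amplitudes interfere, which is the resource the quantum Fourier transform exploits for exponential speedups.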
|October 20||Dr. Bob Planque||
Speed-Accuracy Tradeoffs in Collective Decision Making |
ABSTRACT: Information flows in ever greater quantities on dynamically changing networks. There is thus an increasing need to understand how to put computing power where it is most needed. Most traditional centrally controlled scheduling algorithms are not designed for such tasks, and they should be replaced by more decentralized systems. Ant colonies provide a prime example of a decentralized decision making system as a potential source of inspiration for distributed computing environments. Through the course of evolution, ants have acquired sophisticated mechanisms that use only local information, which they integrate into collective decisions. In this talk I will combine simple models with experimental data to illustrate how ants achieve tradeoffs between speed and accuracy in ant colony emigrations. In particular, I will focus on the question of which mechanisms are most important for maximising one or the other.
|June 09||Kurt Debattista||
Breaking the Pixel: Component-Based Rendering Systems |
ABSTRACT: [...] Since the cost of ray tracing is linear in the number of pixels rendered, or rays shot, traditional rendering systems attempt to reduce the number of primary rays shot. [...] By considering that at each intersection point the reflectance function of the medium is calculated by shooting rays to simulate different properties of the material, we can use the individual rays from one intersection point to the next as a finer level of granularity. In this talk we investigate this approach, termed component-based rendering, and highlight the advantages of adopting a finer level of granularity by removing the recursion when required. We present a framework for component-based rendering which allows the user to control the desired transport equation using a regular expression. This technique is further extended to progressive rendering, time-constrained rendering and perceptually-based rendering. [...]
|June 02||Dr. Dan Page||
Automating Aspects of Cryptographic Implementation: A Cryptography-Aware Language and Compiler |
ABSTRACT: History has shown that programmers do bad mathematics and mathematicians write bad programs. This isn't a good situation if we need to write mathematically oriented, cryptographic software. It is even worse if this software runs on your credit card since it needs to be secure as well as efficient and functionally correct. As a step toward resolving this problem, we present a language and compiler which allows novel cryptography-aware analysis and optimisation phases.
|May 26||Ashutosh Singh||Galois Decomposition of Boolean Functions: An Efficient Synthesis Approach with High Testability|
|May 19||Ashley Montanaro||
Quantum Walks: Definition and Applications |
ABSTRACT: Random walks on graphs are an important tool in computer science. A recently-developed quantum mechanical version of random walks has the potential to become equally important in the study of quantum computation. This talk will provide an introduction to the field of quantum walks, and will be divided into two parts. The first part will explain the concepts behind quantum walks, how they differ from classical random walks, and how a quantum walk on a given graph can be produced. Unlike classical random walks, not every directed graph admits the definition of a quantum walk that respects the structure of the graph. The second part of the talk describes several applications of quantum walks. A number of quantum walk algorithms have been developed that outperform their classical counterparts: I will describe quantum algorithms for network routing, unstructured search, and element distinctness.
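A discrete-time coined walk alternates a unitary "coin flip" on an internal degree of freedom with a coin-conditioned shift along the graph. The Python sketch below runs such a walk on a cycle, an illustrative choice of graph; the talk's constructions and coins may differ.

```python
def quantum_walk_cycle(n, steps):
    """Coined quantum walk on an n-node cycle: a Hadamard coin at every
    node, then a shift moving coin-0 amplitude one node left and coin-1
    amplitude one node right. Returns the position distribution."""
    h = 2 ** -0.5
    amp = [[0j] * n for _ in range(2)]     # amp[coin][node]
    amp[0][0] = h
    amp[1][0] = 1j * h                     # balanced start coin at node 0
    for _ in range(steps):
        coined = [[0j] * n for _ in range(2)]
        for v in range(n):                 # Hadamard coin at each node
            a, b = amp[0][v], amp[1][v]
            coined[0][v] = h * (a + b)
            coined[1][v] = h * (a - b)
        # shift: node v receives coin-0 amplitude from v+1, coin-1 from v-1
        amp = [[coined[0][(v + 1) % n] for v in range(n)],
               [coined[1][(v - 1) % n] for v in range(n)]]
    return [abs(amp[0][v]) ** 2 + abs(amp[1][v]) ** 2 for v in range(n)]

probs = quantum_walk_cycle(8, 5)
```

Because the coin and shift are both unitary, the position probabilities always sum to one; unlike the classical walk's diffusive spread, the amplitudes interfere, which underlies the quadratic and exponential speedups the talk describes.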
|May 12||Michael McCarthy||Whereable Wearables: RF Free Ultrasonic Positioning|
|May 05||Dr. Dave Gibson||Markerless Motion Capture, Particularly of Small Creatures!|
Prof. Mark Zwolinski|
University of Southampton
Pervasive Computing: A SoC Challenge |
ABSTRACT: [...] Although each sensor network may be composed of hundreds or even thousands of identical sensors, these volumes are not sufficient to justify the design of custom integrated circuits. Instead, we need a 'pervasive computing platform' - a reconfigurable system that can be quickly adapted to a specific application. In addition to these design challenges, such a system must be capable of self-diagnosis and even limited self-repair in order to maintain the integrity of the network. In this talk, the design challenges for a System on Chip pervasive computing platform will be presented, together with some applications that bring together the various research skills at the University of Southampton.
|April 21||Elias Gyftodimos||
Higher-Order Bayesian Networks: A Probabilistic Reasoning Framework for Structured Data Representations Based on Higher-Order Logics |
ABSTRACT: Bayesian Networks (BNs) are a popular formalism for performing probabilistic inference. The propositional nature of BNs restricts their application to data which can be represented as tuples of fixed length, excluding a vast field of problems which deal with multi-relational data. Basic Terms, recently introduced by John W. Lloyd, are a family of terms within a typed higher-order logic framework which are particularly suitable for representing structured individuals such as tuples, lists, trees, graphs, sets etc. I will present a proposed extension of BNs, Higher-Order Bayesian Networks, which define probability distributions over domains of Basic Terms. We can perform sampling from these distributions, and use that to calculate the answer to probabilistic inference queries. We have also developed a method for model learning given a database of observations. Finally, I will show how we have applied learning and inference on real-world classification problems.
|April 14||Prof. David May||Processing Exabytes and Exabits: Architecture Revisited|
|March 24||Richard Noad||
Side Channel Cryptanalysis with Hidden Markov Models |
ABSTRACT: This talk looks at how Hidden Markov Models can be applied to side-channel cryptanalysis. In particular, we look at how they can help overcome software counter-measures to side-channel cryptanalysis as well as noisy side-channel data. After introducing the basic principles of side-channel cryptanalysis, we examine the methodology of Karlof and Wagner. We finish with a description of recent work in this area which does not rely on the side-channel data being tokenized ahead of time and provides a more realistic error model for noisy side-channels that allows a wider variety of error types, including incomplete side-channel traces.
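Recovering the hidden operation sequence from a noisy trace is the classic HMM decoding problem, solved by the Viterbi algorithm. The generic Python sketch below illustrates the idea; the states, probabilities and "trace-length" features are invented for illustration and are not taken from the talk or from Karlof and Wagner's work.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an HMM. In the side-channel
    setting, hidden states are key-dependent operations and observations
    are features extracted from the noisy trace."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):   # follow back-pointers
        path.append(back[t][path[-1]])
    return path[::-1]

# hypothetical example: operations 'D' (double) / 'A' (add) observed
# through noisy 'short'/'long' trace segments
states = ('D', 'A')
start_p = {'D': 0.7, 'A': 0.3}
trans_p = {'D': {'D': 0.6, 'A': 0.4}, 'A': {'D': 0.9, 'A': 0.1}}
emit_p = {'D': {'short': 0.8, 'long': 0.2}, 'A': {'short': 0.3, 'long': 0.7}}
path = viterbi(['short', 'long', 'short'], states, start_p, trans_p, emit_p)
# path == ['D', 'A', 'D']
```

Because the decoder maximises over whole sequences rather than classifying each segment independently, it can recover the right operation even where individual observations are ambiguous or missing, which is what makes the approach robust to noisy and incomplete traces.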
|March 10||Mark Pupilli||
Towards Robust Camera Tracking in Cluttered Visual Environments |
ABSTRACT: It has long been of interest in computer vision and more recently in the robotics community to estimate camera motion from a sequence of images (structure from motion/simultaneous localisation and mapping). Traditionally, this requires tracking visual features such as corners or lines between at least two frames and possibly over the duration of the video sequence. We present a novel particle-filter-based algorithm for estimating camera motion which combines feature tracking and motion estimation within the same framework.
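A particle filter maintains a population of motion hypotheses: predict each particle forward under a motion model, reweight it by the observation likelihood, then resample. The 1-D Python sketch below illustrates only that loop, on made-up position data; the talk's tracker operates on full camera pose with image features rather than scalar observations.

```python
import math
import random

def particle_filter_track(observations, n=500, motion_std=0.3, obs_std=0.2):
    """Toy 1-D particle filter: each particle is a hypothesised position.
    Predict with random-walk motion noise, weight by a Gaussian
    observation likelihood, resample, and return the final mean estimate."""
    particles = [random.gauss(observations[0], 1.0) for _ in range(n)]
    for z in observations:
        # predict: diffuse particles under the motion model
        particles = [p + random.gauss(0.0, motion_std) for p in particles]
        # weight: Gaussian likelihood of the observation given each particle
        weights = [math.exp(-((p - z) ** 2) / (2 * obs_std ** 2))
                   for p in particles]
        if sum(weights) == 0:              # all particles lost the track
            weights = [1.0] * n
        # resample: draw particles in proportion to their weights
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n

random.seed(42)
true_path = [0.1 * t for t in range(50)]           # camera drifts rightwards
obs = [x + random.gauss(0.0, 0.2) for x in true_path]
estimate = particle_filter_track(obs)
```

Because the filter carries many hypotheses at once, it can survive brief observation failures where a single-hypothesis tracker would lose the camera, which is the robustness property the talk targets in cluttered scenes.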
|March 03||Dr. Mike Fraser||
The Spectre of the Spectator: Designing for Handover in Public Displays |
ABSTRACT: Interaction is increasingly a public affair, taking place in our theatres, galleries, museums, exhibitions and on the city streets. Questioning how a user's interaction with a computer is experienced by spectators is an important feature of these new domains of practice. I'll examine examples from art, performance and exhibition design, comparing them according to the extent to which they hide, partially reveal, transform, reveal or even amplify a user's manipulations. A comparison of these manipulations - including movements, gestures and utterances - that take place around direct input and output reveals four broad design strategies: 'secretive,' where manipulations and effects are largely hidden; 'expressive,' where they are revealed, enabling the spectator to fully appreciate the performer's interaction; 'magical,' where effects are revealed but the manipulations that caused them are hidden; and finally 'suspenseful,' where manipulations are apparent, but effects only get revealed when the spectator takes their turn. [...]