Our Computing, Informatics and Applications (CIA) research group has a wide remit that includes scientific data processing, smart technology and web/internet technologies. We are particularly interested in how computational and artificial intelligence (AI) methods can be used to improve people's lives and increase productivity.
The group has specific expertise in image and signal processing, data visualisation, mobile devices, 2D/3D modelling, distributed computing, wireless environments, the application of sound and imaging technology in assisted living, online virtual environments, the Internet of Things (IoT), and sensor technology. Much of the research the group undertakes is both applied and collaborative, and includes links with industrial partners. Our work is primarily focused on the following research areas:
Our AI, Machine Learning, Data Science and Applications team has a wide range of research interests that include: data analysis and machine learning techniques, up-to-date theoretical and practical developments and hardware platforms for AI, signal processing, remote sensing for the IoT and how these converge in the emergence of intelligent systems, advanced statistics, machine learning for structured data and graphs, data integration, probabilistic systems, algorithms for massive datasets and large scale optimisation.
Our Imaging Technologies and Acoustics team focuses on AI-based navigation, image and signal processing systems, 2D/3D image modelling, neural networks, inverse problems, numerical modelling of physical processes (electromagnetics, acoustics, psychoacoustics, quantum mechanics), statistical methods, ray-tracing, architectural acoustics, data sonification, data visualisation, auditory display, sound synthesis, mobile devices and technologies for supporting visual impairment.
Our IoT, Digital Systems and Cloud Computing team is interested in IoT, smart environments, pervasive healthcare, Human Computer Interaction (HCI), adaptive computational systems, advanced internet and mobile technologies, AI and the use of computer aided design (CAD) tools and methods applied to digital systems modelling, rapid prototyping using VHDL and FPGAs, smart sensors, and advanced spectral analysis in biomechanics and biomedical sciences.
Our Semantic Web and Educational Technologies team investigates business information systems, open linked data, intelligent internet search and knowledge modelling, online virtual environments, image processing and data visualisation.
If you would like to find out more about our research, please contact Dr Jin Zhang, Director of the Computing, Informatics and Applications Research Group.
Find out more about our members by exploring their staff profiles.
We have also undertaken Knowledge Transfer Partnerships (KTPs) and Knowledge East of England Partnerships worth up to £130k with many local and national companies, including Sovereign Installations Ltd, Glazing Vision, LMK Thermosafe, Papershrink Ltd and Calex Electronics.
We welcome enquiries about possible collaborations. Please contact members directly (see individual staff pages for email addresses and research interests) or contact Dr George Wilson for any general queries about the research collaborations undertaken by this group.
We offer our Computing and Information Science PhD, and have identified a range of innovative research project opportunities for postgraduate researchers. We also welcome any enquiries about postdoctoral study. Contact our Postgraduate Student Coordinator Dr Cristina Luca with any postgraduate research enquiries.
Data sonification techniques have been used to create a two-minute piece of music from the 5,000th Mars sunrise captured by the robotic exploration rover, Opportunity.
Dr Domenico Vicinanza, from our School of Computing & Information Science, and Dr Genevieve Williams, of the University of Exeter, created the piece by scanning a picture from left to right, pixel by pixel, reading brightness and colour information and combining them with terrain elevation. They used algorithms to assign each element a specific pitch and melody. The quiet, slow harmonies are a consequence of the dark background, while the brighter, higher-pitched sounds towards the middle of the piece come from the sonification of the bright sun disk.
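The left-to-right, brightness-to-pitch mapping described above can be sketched in a few lines. This is an illustrative simplification, not the researchers' actual algorithm: it scans a grayscale image column by column and maps each column's mean brightness onto a MIDI note range, ignoring the colour and terrain-elevation inputs the real piece also used.

```python
# Hypothetical sketch of image sonification: scan a grayscale image
# left to right and map each column's mean brightness to a MIDI pitch.
# Dark columns yield low notes; bright columns yield high notes.

def sonify_columns(image, low_note=36, high_note=84):
    """Map each column's mean brightness (0-255) to a MIDI note number.

    image: list of rows, each a list of 0-255 brightness values.
    Returns one MIDI note per column, scanned left to right.
    """
    n_cols = len(image[0])
    notes = []
    for col in range(n_cols):
        mean = sum(row[col] for row in image) / len(image)
        # Linear map from brightness to the chosen pitch range.
        notes.append(low_note + round((mean / 255) * (high_note - low_note)))
    return notes

# A tiny 3x4 'image': dark background on the left, a bright 'sun' on the right.
img = [
    [0, 10, 200, 255],
    [5, 20, 210, 255],
    [0, 15, 190, 250],
]
melody = sonify_columns(img)
```

Played back at a fixed tempo, one note per column, the dark left side of the image becomes low, quiet material and the bright sun disk becomes the high point of the melody.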
Domenico presented the world premiere of the piece, entitled Mars Soundscapes, at the NASA booth at the Supercomputing Conference (SC18) in Dallas in November.
The piece was presented using both conventional speakers and vibrational transducers so the audience could feel the vibrations with their hands, thus enjoying a first-person experience of a sunrise on Mars.
Opportunity is a robotic rover that has been providing photographic data on Mars for NASA since 2004. Earlier this year, it ceased communications following a dust storm. Scientists hope that it may resume its function later this year.
This article originally appeared in the November 2018 issue of 'First', our Faculty Research Newsletter.
Dr Jeanette Chin and Dr George Wilson have collaborated to develop a new weather app, ‘Hyperlocal Rainfall’.
The project was the result of an academic-industrial collaboration between our University, Loughborough University, Peterborough Environment City Trust and Meniscus Systems Ltd, and was funded by Innovate UK. Our University received almost £34k. The app, free to download and available across the UK on both Android and iOS platforms, provides detailed local weather forecasts, which means anyone planning a walk, bike ride, run or barbecue can see when is the best time to do it.
The core platform behind the Hyperlocal Rainfall app is a system called the Meniscus Analytics Platform (MAP). The highly accurate rainfall predictions are achieved through the use of ground-truthing algorithms that utilise high altitude weather forecasts and other environmental datasets to calculate the likelihood of rain within a given area.
The team used machine learning techniques and artificial intelligence algorithms to develop a personalisation engine, which is the ‘brain’ for generating personalised information within the app. To help with planning routes and timings for walkers and cyclists, the personalisation engine analyses a combination of real-time data sources, including the user's profile, environment and location, to provide the user with a unique route recommendation. Jeanette said:
This article originally appeared in the August 2017 issue of 'First', our Faculty Research Newsletter.
Dr Domenico Vicinanza and Dr Genevieve Williams held a session titled ‘Music and Technology in Physical Therapy and Sport Coaching’ at the Network Performing Arts Production Workshop, which included the first remote rowing coaching session using sonification and a mobile phone.
The annual international event on cutting edge technology for remote performances took place in Copenhagen from 3-5 April and is co-organised by GEANT and Internet2 (the European and North American research and education networks respectively). Genevieve and Domenico were part of the programme committee for the event, working on how network and remote sensing could make a difference in remote sport coaching and physical therapy.
During their demonstration, they showcased how the sensors available in every smartphone could be used to run a coaching session over the network. A volunteer sat on a rowing machine in Copenhagen, where Domenico installed a mobile phone on the machine sending measurements of the movement of the oar to Cambridge through an app developed by the two researchers. These measurements, once received by Genevieve in Cambridge, were converted into melodies which formed the basis for the feedback Genevieve provided to the rower in Copenhagen in real-time.
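The conversion of the phone's movement measurements into melodies can be sketched as follows. The researchers' app is not public, so the mapping and parameter names here are illustrative assumptions: each accelerometer sample from the stroke is clamped to an expected range and mapped linearly onto a MIDI note range, so the rhythm and amplitude of the stroke become audible to the remote coach.

```python
# Hypothetical sketch of movement sonification: map a stream of
# smartphone accelerometer samples (m/s^2) to MIDI pitches, so a
# remote coach can 'hear' the shape of each rowing stroke.
# The range limits and note range below are illustrative choices.

def movement_to_melody(samples, lo=-20.0, hi=20.0, low_note=48, high_note=72):
    """Linearly map each acceleration sample onto a MIDI note range."""
    notes = []
    for a in samples:
        a = max(lo, min(hi, a))        # clamp to the expected sensor range
        frac = (a - lo) / (hi - lo)    # normalise to 0..1
        notes.append(low_note + round(frac * (high_note - low_note)))
    return notes

# One simulated rowing stroke: drive (positive acceleration), then recovery.
stroke = [0.0, 8.0, 15.0, 6.0, -4.0, -10.0, -2.0, 0.0]
melody = movement_to_melody(stroke)
```

A strong, even stroke then produces a smooth rising-and-falling phrase, while hesitation or asymmetry in the movement is immediately audible as an irregular melody.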
This experiment, the first of its kind, showed how sports coaching and physiotherapy can be delivered using a mobile phone, bringing expertise to athletes and monitoring progress in patients (in particular the elderly and isolated) in remote parts of the world.
This article originally appeared in the June 2017 issue of 'First', our Faculty Research Newsletter.
Smith, L., Stubbs, B., Hu, L., Veronese, N., Vancampfort, D., Williams, G., Vicinanza, D., Jackson, S., Ying, L., López Sánchez, G.F. and Yang, L., 2019. Is active transport and leisure time physical activity associated with inflammatory markers in US adults: Cross-sectional analyses from NHANES. Journal of Physical Activity and Health. In Press.
Moseley, P., Savini, G., Saenz, E., Zhang, J. and Ade, P., 2019. Detailed characterization of a lenster - A mm-wave flat lens. IEEE Transactions on Antennas and Propagation [E-pub ahead of print]. doi: 10.1109/TAP.2019.2902435.
Zhao, G., Savini, G., Saenz, E., Zhang, J. and Ade, P., 2019. A dual-port THz time domain spectroscopy system optimized for recovery of a sample's Jones matrix. Scientific Reports, 9. doi: 10.1038/s41598-019-39322-y.
Vicinanza, D., Newell, K.M., Irwin, G., Smith, L. and Williams, G.K., 2018. Limit cycle dynamics of the gymnastics longswing. Human Movement Science, 57, pp.217-226.
Sapkota, R.P., van der Linde, I. and Pardhan, S., 2018. How does aging influence object-location and name-location binding during a visual short-term memory task? Aging & Mental Health, pp.1-10. [E-pub ahead of print].
Bright, P. and van der Linde, I., 2018. Comparison of methods for estimating premorbid intelligence. Neuropsychological Rehabilitation, pp.1-14 [E-pub ahead of print].
van der Linde, I. and Bright, P., 2018. A genetic algorithm to find optimal reading test word subsets for estimating full-scale IQ. PLOS ONE, 13(10), e0205754.
Bright, P., Hale, E., Gooch, V.J., Myhill, T. and van der Linde, I., 2018. The National Adult Reading Test: Restandardisation against the Wechsler Adult Intelligence Scale—Fourth edition. Neuropsychological Rehabilitation, 28(6), pp.1019-1027.
Kettouch, M., Luca, C. and Hobbs, M., 2018. Mediator-based framework for keyword search over semi-structured and linked data. Journal of Intelligent Information Systems, 52(2), pp.311-335. doi: 10.1007/s10844-018-0536-1.
Williams, G.K. and Vicinanza, D., 2017. Coordination in gait: Demonstration of a spectral approach. Journal of Sports Sciences, 36(15), pp.1768-1775.
Moseley, P., Savini, G., Zhang, J. and Ade, P., 2017. Dual focus polarisation splitting lens. Optics Express, 25(21), pp.25363-25373. doi: 10.1364/OE.25.025363.
Sapkota, R.P., van der Linde, I., Lamichhane, N., Upadhyaya, T. and Pardhan, S., 2017. Patients with Mild Cognitive Impairment Show Lower Visual Short-Term Memory Performance in Feature Binding Tasks. Dementia and Geriatric Cognitive Disorders Extra, 7(1), pp.74-86. doi: 10.1159/000455831.
Campbell, W., Paterson, J. and van der Linde, I., 2017. Listener preferences for alternative dynamic-range-compressed audio configurations. Journal of the Audio Engineering Society, 65(7/8), pp.540-551. doi: 10.17743/jaes.2017.0019.
Kolarik, A.J., Raman, R., Moore, B.C., Cirstea, S., Gopalakrishnan, S. and Pardhan, S., 2017. Partial visual loss affects self-reports of hearing abilities measured using a modified version of the speech, spatial, and qualities of hearing questionnaire. Frontiers in Psychology, 8(561). doi: 10.3389/fpsyg.2017.00561.
Kolarik, A.J., Pardhan, S., Cirstea, S. and Moore, B.C., 2017. Auditory spatial representations of the world are compressed in blind humans. Experimental Brain Research, 235(2), pp.597-606. doi: 10.1007/s00221-016-4823-1.
Williams, G., Aggio, D., Vicinanza, D., Stubbs, B., Kerr, C., Johnstone, J., Roberts, J. and Smith, L., 2017. Prospective associations between measures of gross and fine motor coordination in infants and objectively measured physical activity and sedentary behaviour in childhood. Medicine, 96(46), e8424. doi: 10.1097/MD.0000000000008424.
Sapkota, R.P., Pardhan, S. and van der Linde, I., 2016. Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis. Journal of Experimental Psychology: Learning, Memory and Cognition, 42(8), pp.1304-1315.
McGonigle, C., van der Linde, I., Pardhan, S., Engel, S.A., Mallen, E.A. and Allen, P.M., 2016. Myopes experience greater contrast adaptation during reading. Vision Research, 121, pp.1-9. doi: 10.1016/j.visres.2016.01.001.
Kolarik, A.J., Moore, B.C., Zahorik, P., Cirstea, S. and Pardhan, S., 2016. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss. Attention, Perception and Psychophysics, 78(2), pp.373-395.
Stanciu, A., Cirstea, M.N. and Moldoveanu, F.D., 2016. Analysis and evaluation of PUF-based SoC designs for security applications. IEEE Transactions on Industrial Electronics, 63(9), pp.5699-5708. doi: 10.1109/TIE.2016.2570720.
Folea, S.C., Mois, G., Muresan, C.I., Miclea, L., De Keyser, R. and Cirstea, M.N., 2016. A portable implementation on industrial devices of a predictive controller using graphical programming. IEEE Transactions on Industrial Informatics, 12(2), pp.736-744.
Sapkota, R.P., Pardhan, S. and van der Linde, I., 2015. Change Detection in Visual Short-Term Memory. Experimental Psychology, 62(4), pp.232-239.
Kolarik, A.J., Cirstea, S., Pardhan, S. and Moore, B.C., 2014. A summary of research investigating echolocation abilities of blind and sighted humans. Hearing Research, 310, pp.60-68. doi: 10.1016/j.heares.2014.01.010.
Hobbs, M., Luca, C., Fatima, A. and Warnes, M., 2014. Ontological analysis for dynamic data model exploration. Electronic Journal of Applied Statistical Analysis: Decision Support Systems and Services Evaluation, 5(1), pp.42-56.
Sapkota, R.P., van der Linde, I. and Pardhan, S., 2014. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task? Frontiers in Aging Neuroscience, 6, 346. doi: 10.3389/fnagi.2014.00346.
Kolarik, A.J., Timmis, M.A., Cirstea, S. and Pardhan, S., 2014. Sensory substitution information informs locomotor adjustments when walking through apertures. Experimental Brain Research, 232(3), pp.975-984. doi: 10.1007/s00221-013-3809-5.