Dr Pooneh Bagheri Zadeh

Course Director

Dr Pooneh Bagheri Zadeh is Course Director in Computer Science at Leeds Beckett University. Her current research covers Image and Video Processing, Computer Vision in AI Drones, Intelligent Systems, Digital Forensics and Security, Image and Video Forensics, Mobile Forensics, Video Analytics, Super-Resolution and Hyper-Spectral Imaging.

ORCID: 0000-0002-2875-3253 | Scopus ID: 22137238400

About

Dr Pooneh Bagheri Zadeh is Course Director in Computer Science at Leeds Beckett University. Her current research covers Image and Video Processing, Computer Vision in AI Drones, Intelligent Systems, Digital Forensics and Security, Image and Video Forensics, Mobile Forensics, Video Analytics, Super-Resolution and Hyper-Spectral Imaging.

Pooneh received her MSc in Computer Science (2004) and her PhD in Computer Vision (2008), both from Glasgow Caledonian University. She continued her career in industry, working on real-time embedded video analytics systems, before moving to De Montfort University as a senior research fellow on an EPSRC project in stereo vision and 3-D reconstruction. She then joined Staffordshire University as a lecturer in computer forensics and security, later became a senior lecturer at the University of Gloucestershire, and subsequently returned to De Montfort University as Course Leader for the MSc Forensic Computing for Practitioners.

Pooneh has published more than 50 international conference and journal papers. She has successfully graduated two PhD students and is currently supervising four more. She has also examined eight PhD and MPhil candidates at universities across the UK.

Pooneh is currently External Examiner in Computer Systems Engineering and Robotics at London Metropolitan University, and in Digital Forensics and Cyber Security at Edinburgh Napier University.

Research interests

Dr Bagheri Zadeh's current research is in the areas of Image and Video Processing and their applications, Computer Vision in AI Drones, Machine Learning and Deep Learning applications in object classification, Intelligent Systems, Digital Forensics and Cyber Security, Image and Video Forensics, Mobile Forensics, Image and Video Compression, all aspects of Video Analytics, Super-Resolution and Hyper-Spectral Imaging.

Publications (47)

Conference Proceeding (with ISSN)

Developing an Intelligent Filtering Technique for Bring Your Own Device Network Access Control

Featured 01 July 2017 Proceedings of the International Conference on Future Networks and Distributed Systems (ICFNDS '17) ACM
Authors: Ayesh A, Muhammad M, Bagheri Zadeh P

With the rapid increase in smartphones and tablets, Bring Your Own Device (BYOD) has simplified computing by introducing the use of personally owned devices, which can be used to access business enterprise content and networks. BYOD offers several business benefits, such as employee job satisfaction, increased job efficiency and flexibility. However, allowing employees to bring their own devices can lead to a plethora of security issues, such as data theft, unauthorised access and data leakage. This paper investigates current security approaches and how organisations can leverage these techniques, in terms of policies, risks and existing security measures, to mitigate or halt these security challenges. This research aims to fill the access control gap in the BYOD environment by developing an Intelligent Filtering Technique (IFT) using Artificial Intelligence (AI), based on the behavioural patterns of packet Inter-Arrival-Time (IAT) features extracted from network traffic flow packet headers, such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Internet Control Messaging Protocol (ICMP).
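
The IAT-based filtering idea above can be illustrated with a small sketch. This is not the paper's IFT: the function name, the statistics chosen, and the input format (a sorted list of packet timestamps in seconds) are all assumptions for illustration.

```python
def iat_features(timestamps):
    """Derive simple Inter-Arrival-Time (IAT) statistics from a sorted
    list of packet timestamps (seconds). A classifier could use these
    per-device features to characterise its traffic behaviour."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    n = len(iats)
    mean = sum(iats) / n
    var = sum((x - mean) ** 2 for x in iats) / n
    return {"mean_iat": mean, "var_iat": var,
            "min_iat": min(iats), "max_iat": max(iats)}
```

For example, timestamps `[0.0, 0.1, 0.3, 0.6]` yield inter-arrival times of 0.1, 0.2 and 0.3 seconds, so the mean IAT is 0.2 s.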

Journal article

Prevention of crime in B2C E-Commerce: How E-Retailers/Banks protect themselves from Criminal Activities

Featured 30 November 2015
Authors: AlMajed N, Maglaras L, Helge J, Bagheri Zadeh P
Conference Contribution

Improving Security in Bring Your Own Device (BYOD) Environment by Controlling Access

Featured 31 January 2017 International Conference on Future Networks and Distributed Systems (ICFNDS 2017)
Authors: Muhammad A, Ayesh A, Bagheri Zadeh P
Conference Proceeding (with ISSN)

A hierarchical multiwavelet based stereo correspondence matching technique

Featured 01 December 2011 European Signal Processing Conference
Authors: Bagheri Zadeh P, Serdean CV

This paper presents a hierarchical stereo correspondence matching technique based on multiwavelet transforms. A global error energy minimization technique is employed to generate a disparity map for each of the four multiwavelet approximation subband pairs. The information in the four disparity maps is then combined using a Fuzzy algorithm to generate a single disparity map. This initial disparity map is estimated at the lowest resolution and is progressively passed on to higher resolution levels. Hence, the search at higher resolution levels is significantly reduced, thereby reducing the computational cost of the overall process and improving the reliability of the final disparity map. Results show that the proposed technique produces a smoother disparity map with fewer mismatch errors compared to applying the same method in the spatial and wavelet domains. The proposed algorithm fares very well when compared to other state-of-the-art techniques from the Middlebury database. © 2011 EURASIP.
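
The per-subband disparity estimation described above rests on matching blocks between the two views under an error-energy criterion. The following sketch uses a plain sum-of-squared-differences search as a stand-in for the paper's global error energy minimisation; the block size, search range and grayscale-array input format are assumptions.

```python
import numpy as np

def disparity_ssd(left, right, block=3, max_disp=8):
    """Per-pixel disparity by minimising the sum of squared differences
    between a block in the left image and candidate blocks in the right
    image (a simplified stand-in for error energy minimisation)."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                err = float(((patch - cand) ** 2).sum())
                if err < best:
                    best, best_d = err, d
            disp[y, x] = best_d
    return disp
```

Shifting a textured image horizontally by two pixels and matching it against the original recovers a disparity of 2 at interior pixels.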

Journal article
An investigation into Unmanned Aerial System (UAS) forensics: Data extraction & analysis
Featured 11 April 2022 Forensic Science International: Digital Investigation 41:301379 Elsevier BV
Authors: Thornton G, Bagheri Zadeh P

Recent developments in drone technologies have led to a surge in commercial sales of drone devices, which have found use in many industries. However, the technology has also been misused to commit crimes such as drug trafficking, robberies, and terror attacks. The digital forensics industry must match this speed of development with forensic tools and techniques. However, there is a lack of an agreed framework for the extraction and analysis of drone devices, and a lack of support in the commercial digital forensics tools available. In this research, an investigation into the extraction tools and analysis techniques available for drone devices has been performed to identify best practices for handling drone devices in a forensically sound manner. A new framework to perform a full forensic analysis of small to medium-sized commercial drone devices and their controllers is proposed, giving investigators a plan of action for performing forensic analysis on these devices. The proposed framework overcomes some limitations of other drone forensics investigation frameworks presented in the literature.

Conference Proceeding (with ISSN)

Stereo Correspondence Matching: Balanced Multiwavelets versus unbalanced Multiwavelets

Featured 02 August 2010 The 2010 European Signal Processing Conference (EUSIPCO-2010) Denmark
Authors: Bagheri Zadeh P, Serdean C

This paper investigates the efficiency of unbalanced versus balanced multiwavelets in stereo correspondence matching. A multiwavelet transform is first applied to a pair of stereo images to decorrelate them into a number of subbands. Information in the approximation subbands of an unbalanced multiwavelet carries different spectral content of the input image, while the approximation subbands of a balanced multiwavelet carry similar spectral content. Hence, applying the approximation subbands of unbalanced multiwavelets to disparity map generation can produce more accurate results than using balanced multiwavelets. A global error energy minimization technique is employed to generate a disparity map for each approximation subband. The information in the resulting disparity maps is then combined using a Fuzzy algorithm to generate a dense disparity map. Simulation results show that unbalanced multiwavelets produce a smoother disparity map with fewer mismatch errors than balanced multiwavelets. © EURASIP, 2010.

Conference Proceeding (with ISSN)

Stereo Video Disparity Estimation Using Multi-wavelets

Featured 04 May 2012 International Conference on Digital Telecommunications 2012 (ICDT2012)
Authors: Bagheri Zadeh P, Serdean C
Conference Proceeding (with ISSN)

A Novel Tap Selection Design for Filters in Unequal-Passbands Scheme

Featured 17 February 2016 ICDT 2016 - The Eleventh International Conference on Digital Telecommunications
Authors: Badran S, Ahmadi S, Bagheri Zadeh P, Shahin I
Journal article

Multiwavelets in the Context of Hierarchical Stereo Correspondence Matching Techniques

Featured 30 September 2011 International Journal on Advances in Telecommunications
Authors: Bagheri Zadeh P, Serdean C
Conference Contribution

Root cause analysis (RCA) as a preliminary tool into the investigation of identity theft

Featured 07 July 2016 2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) London IEEE
Authors: Abubakr A, Bagheri Zadeh P, Helge J, Holley R

Identity theft has been known for centuries: falsified identity documents were misused, and offences such as impersonating others were common in society. However, the advent of technology changed how this crime is conducted; through the Internet, personal information can be stolen and misused by criminals. The causes of the crime range from human error and judgement to failures of computing and networking systems that allow unauthorised access to personal information. To provide a better tool for investigating this crime, its causes must be explored, providing a better framework for investigating identity theft. This study uses Root Cause Analysis (RCA) as a preliminary tool to identify and depict the causes of identity theft, paving the way to investigating the crime and creating incident response plans.

Journal article
A New Approach to Classify Drones Using a Deep Convolutional Neural Network
Featured 12 July 2024 Drones 8(7):1-28 MDPI AG
Authors: Rakshit H, Bagheri Zadeh P

In recent years, the widespread adoption of Unmanned Aerial Vehicles (UAVs), commonly known as drones, among the public has led to significant security concerns, prompting intense research into drone classification methodologies. The swift and accurate classification of drones poses a considerable challenge due to their diminutive size and rapid movements. To address this challenge, this paper introduces (i) a novel drone classification approach utilizing deep convolution and deep transfer learning techniques. The model incorporates bypass connections and Leaky ReLU activation functions to mitigate the 'vanishing gradient problem' and the 'dying ReLU problem', respectively, associated with deep networks, and is trained on a diverse dataset. This study employs (ii) a custom dataset comprising both audio and visual data of drones as well as analogous objects such as an airplane, birds, a helicopter, etc., to enhance classification accuracy. The integration of audio-visual information facilitates more precise drone classification. Furthermore, (iii) a new Finite Impulse Response (FIR) low-pass filter is proposed to convert audio signals into spectrogram images, reducing susceptibility to noise and interference. The proposed model signifies a transformative advancement in convolutional neural network design, illustrating the compatibility of efficacy and efficiency without compromising on complexity and learnable properties. A notable performance was demonstrated by the proposed model, with an accuracy of 100% achieved on the test images using only four million learnable parameters. In contrast, the Resnet50 and Inception-V3 models exhibit 90% accuracy each on the same test set, despite the employment of 23.50 million and 21.80 million learnable parameters, respectively.
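
The two ingredients highlighted above, bypass (skip) connections and Leaky ReLU activations, can be sketched in a few lines. The actual model's layer shapes and weights are not public; this toy NumPy version only illustrates why the skip path and the non-zero negative slope help gradients flow.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU keeps a small slope for negative inputs, so a unit
    never outputs an identically-zero gradient (the 'dying ReLU'
    problem mentioned in the abstract)."""
    return np.where(x > 0, x, alpha * x)

def bypass_block(x, weight):
    """Toy bypass (residual) connection: output = activation(Wx) + x.
    The identity path lets gradients reach earlier layers unattenuated,
    mitigating the 'vanishing gradient' problem in deep networks."""
    return leaky_relu(weight @ x) + x
```

With an identity weight matrix, the input `[1, -1]` maps to `[2, -1.01]`: the positive component passes through both paths, while the negative one is scaled by 0.01 on the activation path and added back to the skip path.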

Conference Proceeding (with ISSN)
A New Cosine Hyperbolic Window Function-based FIR Filter design for Audio to Spectrogram Conversion
Featured 18 September 2024 2024 8th International Conference on Imaging, Signal Processing and Communications (ICISPC 2024) Fukuoka, Japan IEEE Xplore
Authors: Rakshit H, Bagheri Zadeh P

In recent years, deep learning-based audio signal processing has become a popular way to extract features from audio signals and train systems on those extracted features and patterns. These features are used for speech recognition, vehicle tracking and other types of audio processing. In many cases, converting the audio signal to a spectrogram is a vital step in extracting salient features for the learning system. Spectrograms exhibiting minimal noise and interference contribute significantly to feature extraction, thereby optimizing the efficiency of the learning system. In this paper, a novel adjustable window function, based on the hyperbolic cosine function, is proposed for designing Finite Impulse Response (FIR) low-pass filters that can be utilized to reduce noise and interference in spectrograms. The spectral characteristics of the proposed window function are compared with those of state-of-the-art window functions, and the performance of the proposed window-based FIR low-pass filter is assessed against state-of-the-art FIR low-pass filters in terms of reducing noise and interference in spectrograms. Experimental results show that the proposed window-based FIR low-pass filter outperforms existing methods in eliminating noise and interference during audio-to-spectrogram conversion.
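
The window-method FIR design the paper builds on can be sketched as follows. The paper's exact adjustable cosine-hyperbolic window is not reproduced here; `cosh_window` below is one common cosh-based form from the window-function literature, used as an assumed stand-in.

```python
import numpy as np

def cosh_window(N, a=3.0):
    """A common cosh-based window (an assumed stand-in for the paper's
    adjustable window): w[n] = cosh(a*sqrt(1 - ((n-m)/m)^2)) / cosh(a),
    where m is the window midpoint and a controls the shape."""
    n = np.arange(N)
    m = (N - 1) / 2
    return np.cosh(a * np.sqrt(1 - ((n - m) / m) ** 2)) / np.cosh(a)

def fir_lowpass(N, fc, window):
    """Window-method FIR low-pass design: an ideal sinc impulse response
    truncated and tapered by the window. fc is the cutoff frequency as a
    fraction of the sampling rate (0 < fc < 0.5)."""
    n = np.arange(N) - (N - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n)  # ideal low-pass impulse response
    h *= window
    return h / h.sum()                # normalise for unity gain at DC
```

The resulting filter taps are symmetric (linear phase), sum to one (unity DC gain), and peak at the centre tap, as expected of a windowed-sinc low-pass design.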

Journal article
A Novel Approach to Detect Drones Using Deep Convolutional Neural Network Architecture
Featured 13 July 2024 Sensors 24(14):1-25 MDPI AG
Authors: Rakshit H, Bagheri Zadeh P

Over the past decades, drones have become more attainable by the public due to their widespread availability at affordable prices. Nevertheless, this situation sparks serious concerns in both the cyber and physical security domains, as drones can be employed for malicious activities that threaten public safety. However, detecting drones instantly and efficiently is a very difficult task due to their tiny size and swift flights. This paper presents a novel drone detection method using deep convolutional learning and deep transfer learning. The proposed algorithm employs a new feature extraction network, which is added to the modified You Only Look Once version 2 (YOLOv2) network. The feature extraction model uses bypass connections to learn features from the training sets and solves the "vanishing gradient" problem caused by the increasing depth of the network. The structure of YOLOv2 is modified by replacing the rectified linear unit (ReLU) with a leaky ReLU activation function and adding an extra convolutional layer with a stride of 2 to improve small object detection accuracy. Using leaky ReLU solves the "dying ReLU" problem. The additional convolutional layer with a stride of 2 reduces the spatial dimensions of the feature maps and helps the network focus on larger contextual information while still preserving the ability to detect small objects. The model is trained with a custom dataset that contains various types of drones, airplanes, birds, and helicopters under various weather conditions. The proposed model demonstrates a notable performance, achieving an accuracy of 77% on the test images with only 5 million learnable parameters, in contrast to the Darknet53 + YOLOv3 model, which exhibits a 54% accuracy on the same test set despite employing 62 million learnable parameters.

Journal article
DCT image codec using variance of sub-regions
Featured 11 August 2015 Open Computer Science 5(1):13-21 Walter de Gruyter GmbH

This paper presents a novel variance of sub-regions and discrete cosine transform based image-coding scheme. The proposed encoder divides the input image into a number of non-overlapping blocks. The coefficients in each block are then transformed into their spatial frequencies using a discrete cosine transform. Coefficients with the same spatial frequency index at different blocks are put together, generating a number of matrices, where each matrix contains coefficients of a particular spatial frequency index. The matrix containing DC coefficients is losslessly coded to preserve its visually important information. Matrices containing high-frequency coefficients are coded using a variance of sub-regions based encoding algorithm proposed in this paper. Perceptual weights are used to regulate the threshold value required in the coding process of the high-frequency matrices. An extension of the system to progressive image transmission is also developed. The proposed coding scheme, JPEG and JPEG2000 were applied to a number of test images. Results show that the proposed coding scheme outperforms JPEG and JPEG2000 subjectively and objectively at low compression ratios. Results also indicate that images decoded by the proposed codec exhibit superior subjective quality at high compression ratios compared to JPEG, while offering satisfactory results compared to JPEG2000.
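
The coefficient regrouping step described above (gathering same-frequency DCT coefficients from every block into one matrix per spatial frequency index) can be sketched as follows. The block size and the orthonormal DCT-II normalisation are assumptions for illustration.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block via the orthonormal 1-D DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ block @ C.T

def regroup(image, b=8):
    """Transform each b x b block and gather coefficients of the same
    frequency index (u, v) from all blocks into one matrix, as the
    encoder above does before coding each frequency band separately."""
    h, w = image.shape
    bands = np.empty((b, b, h // b, w // b))
    for i in range(0, h, b):
        for j in range(0, w, b):
            bands[:, :, i // b, j // b] = dct2(image[i:i + b, j:j + b])
    return bands  # bands[u, v] holds all (u, v)-frequency coefficients
```

For a constant image, every block's energy collapses into its DC coefficient, so the `bands[0, 0]` matrix carries all the information and every high-frequency matrix is (numerically) zero.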

Journal article

Multiresolution HVS and statistically based image coding scheme

Featured August 2010 Multimedia Tools and Applications 49(2):347-370 Springer Science and Business Media LLC
Authors: Bagheri Zadeh P, Sheikh Akbari A, Buggy T, Soraghan J

In this paper a novel multiresolution human visual system and statistically based image coding scheme is presented. It decorrelates the input image into a number of subbands using a lifting based wavelet transform. The codec employs a novel statistical encoding algorithm to code the coefficients in the detail subbands. Perceptual weights are applied to regulate the threshold value of each detail subband that is required in the statistical encoding process. The baseband coefficients are losslessly coded. An extension of the codec to the progressive transmission of images is also developed. To evaluate the performance of the coding scheme, it was applied to a number of test images and its performance with and without perceptual weights is evaluated. The results indicate significant improvement in both subjective and objective quality of the reconstructed images when perceptual weights are employed. The performance of the proposed technique was also compared to JPEG and JPEG2000. The results show that the proposed coding scheme outperforms both coding standards at low compression ratios, while offering satisfactory performance at higher compression ratios. © Springer Science + Business Media, LLC 2009.

Journal article

Statistical, DCT and vector quantisation-based video codec

Featured 16 June 2008 IET Image Processing 2(3):107-115 Institution of Engineering and Technology (IET)

The authors present a novel hybrid statistical, DCT and vector quantisation-based video-coding technique. In intra mode of operation, an input frame is divided into a number of non-overlapping pixel blocks. A discrete cosine transform then converts the coefficients in each block into the frequency domain. Coefficients with the same frequency index at different blocks are put together generating a number of matrices, where each matrix contains the coefficients of a particular frequency index. The matrix, which contains the DC coefficients, is losslessly coded. Matrices containing high frequency coefficients are coded using a novel statistical encoder. In inter mode of operation, overlapped block motion estimation / compensation is employed to exploit temporal redundancy between successive frames and generates a displaced frame difference (DFD) for each inter-frame. A wavelet transform then decomposes the DFD-frame into its frequency subbands. Coefficients in the detail subbands are vector quantised while coefficients in the baseband are losslessly coded. To evaluate the performance of the codec, the proposed codec and the adaptive subband vector quantisation (ASVQ) video codec, which has been shown to outperform H.263 at all bitrates, were applied to a number of test sequences. Results indicate that the proposed codec outperforms the ASVQ video codec subjectively and objectively at all bitrates. © 2008 The Institution of Engineering and Technology.

Conference Proceeding (with ISSN)

Wavelet based image enlargement technique

Featured 04 September 2015 10th International Conference on Global Security, Safety & Sustainability Communications in Computer and Information Science London, England Springer Verlag (Germany)

This paper presents an image enlargement technique using a wavelet transform. The proposed technique considers the low-resolution input image as the wavelet baseband and estimates the information in the high-frequency sub-bands from the wavelet high-frequency sub-bands of the input image using wavelet filters. The super-resolution image is finally generated by applying an inverse wavelet transform to the high-resolution sub-bands. To evaluate the performance of the proposed image enlargement technique, five standard test images with a variety of frequency components were chosen and enlarged using the proposed technique and six state-of-the-art algorithms. Experimental results show the proposed technique significantly outperforms the classical and non-classical super-resolution methods, both subjectively and objectively.

Conference Proceeding (with ISSN)
HEVC based Stereo Video codec
Featured 01 December 2015 Proceedings of the 2nd IET International Conference on Intelligent Signal Processing IET London: Savoy Place, UK IET

Development of stereo video codecs with higher compression efficiency within the multi-view extension of HEVC (MV-HEVC) has been an active area of research. In this paper, a frame-interleaved stereo video coding scheme based on the standard MV-HEVC codec is proposed. The proposed codec applies a reduced-layer approach to encode the frame-interleaved stereo sequences. A frame interleaving algorithm is developed to reorder the stereo video frames into a monocular video, such that the proposed codec can exploit both inter-view and temporal correlations to improve its coding performance. To evaluate the performance of the proposed codec, three standard multi-view test video sequences, named "Poznan_Street", "Kendo" and "Newspaper1", were selected and coded using the proposed codec and the standard MV-HEVC codec at different QPs and bitrates. Experimental results show that the proposed codec gives significantly higher coding performance than the standard MV-HEVC codec at all bitrates.
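
The frame reordering idea can be sketched in a few lines. The paper's exact interleaving algorithm is not given here; simple alternation of left- and right-view frames is one plausible reading, so a monocular encoder sees each frame's temporal neighbour and its inter-view partner as adjacent reference frames.

```python
def interleave_stereo(left_frames, right_frames):
    """Reorder a stereo pair of frame sequences into one monocular
    sequence (L0, R0, L1, R1, ...). A single-layer encoder can then
    exploit both inter-view and temporal correlation. Assumed sketch;
    the paper's reordering may differ."""
    out = []
    for l, r in zip(left_frames, right_frames):
        out.extend([l, r])
    return out
```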

Conference Proceeding (with ISSN)
Evaluation of Wavelet Transform Families in Image Resolution Enhancement
Featured 01 December 2015 Proceedings of the 2nd IET International Conference on Intelligent Signal Processing IET London: Savoy Place, UK IET

Journal article

Multiresolution, perceptual and vector quantization based video codec

Featured June 2012 Multimedia Tools and Applications 58(3):569-583 Springer Science and Business Media LLC
Authors: Sheikh Akbari A, Bagheri Zadeh P, Buggy T, Soraghan J

This paper presents a novel Multiresolution, Perceptual and Vector Quantization (MPVQ) based video coding scheme. In the intra-frame mode of operation, a wavelet transform is applied to the input frame and decorrelates it into its frequency subbands. The coefficients in each detail subband are pixel quantized using a uniform quantization factor divided by the perceptual weighting factor of that subband. The quantized coefficients are finally coded using a quadtree-coding algorithm. Perceptual weights are specifically calculated for the centre of each detail subband. In the inter-frame mode of operation, a Displaced Frame Difference (DFD) is first generated using an overlapped block motion estimation/compensation technique. A wavelet transform is then applied on the DFD and converts it into its frequency subbands. The detail subbands are finally vector quantized using an Adaptive Vector Quantization (AVQ) scheme. To evaluate the performance of the proposed codec, the proposed codec and the adaptive subband vector quantization coding scheme (ASVQ), which has been shown to outperform H.263 at all bitrates, were applied to six test sequences. Experimental results indicate that the proposed codec outperforms the ASVQ subjectively and objectively at all bit rates. © 2011 Springer Science+Business Media, LLC.

Conference Proceeding (with ISSN)

Progressive multiresolution perceptual and statistically based image codec

Featured 01 December 2006 Proceedings of the 6th IASTED International Conference on Visualization Imaging and Image Processing Viip 2006
Authors: Bagheri Zadeh P, Buggy T, Sheikh Akbari A, Soraghan JJ

This paper presents a progressive multiresolution human visual system and statistically based image-coding scheme. The proposed coding scheme decorrelates the input image into a number of subbands using a lifting based wavelet transform and employs a novel statistically-based coding algorithm to code the coefficients in the detail subbands. Perceptual weights are applied to regulate the threshold value of each detail subband that is required in the coding process. The baseband coefficients are losslessly coded. The coded subbands are used for progressive image transmission. To evaluate the performance of the coding scheme, it was applied to a number of test images with and without perceptual weights. The results indicate significant improvement in both subjective and objective quality of the reconstructed images when the perceptual weights are employed. The performance of the new progressive image codec was also compared to JPEG and JPEG2000. The results show that the proposed computationally efficient coding scheme outperforms both coding standards at low compression ratios, while offering satisfactory performance at higher compression ratios. The application of the codec to progressive image transmission is also investigated on a series of test images.

Conference Proceeding (with ISSN)

A novel statistical and DCT based image encoder

Featured 01 December 2007 Proceedings of the 4th IASTED International Conference on Signal Processing Pattern Recognition and Applications Sppra 2007
Authors: Bagheri Zadeh P, Buggy T, Sheikh Akbari A, Soraghan JJ

This paper presents a novel statistical and discrete cosine transform (DCT) based image-coding scheme. The proposed coding scheme divides the input image into a number of non-overlapping pixel blocks. The coefficients in each block are then decorrelated into their spatial frequencies using a discrete cosine transform. Coefficients with the same spatial frequency at different blocks are put together to generate a number of matrices, where each matrix contains coefficients of a particular spatial frequency. The matrix containing DC coefficients is losslessly coded to preserve visually important information. Matrices consisting of high-frequency coefficients are coded using a novel statistically based coding algorithm developed in this paper. Perceptual weights are used to regulate the threshold value required in the coding process of the high-frequency matrices. The proposed coding scheme and JPEG were applied to three test images: Lena, Elaine and House. Results show that the proposed coding scheme outperforms JPEG subjectively and objectively at low compression ratios, and that images decoded using the proposed codec have superior subjective quality at high compression ratios, where JPEG suffers from blocking artifacts.

Journal article

Compressive sampling and wavelet-based multi-view image compression scheme

Featured 25 October 2012 Electronics Letters 48(22):1403-1404 Institution of Engineering and Technology (IET)

A multi-view image codec using a disparity-compensated lifting-based wavelet transform and compressive sampling (CS) is presented. The input images are decorrelated into their sub-bands using disparity-compensated view filtering lifting-based wavelet transforms. A wavelet transform is then applied to the baseband view, decomposing it into its sub-bands. High-frequency sub-bands are separately hard-thresholded. Wavelet weights for the high-frequency sub-bands are calculated and used to adjust the threshold values for different sub-bands. The CS algorithm is then employed to generate measurements for each resulting sub-band. On the decoder side, the Basis Pursuit method is used to recover the high-frequency sub-bands. Results indicate that the proposed codec significantly outperforms state-of-the-art CS-based multi-view image codecs. © 2012 The Institution of Engineering and Technology.
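
The measurement step of such a codec (projecting each thresholded sub-band onto a small set of random vectors, y = Φx) can be sketched as below. A Gaussian measurement matrix is a standard CS choice but an assumption here, and the Basis Pursuit recovery at the decoder is omitted, since it requires an l1-minimisation solver.

```python
import numpy as np

def cs_measure(subband, m, seed=0):
    """Compressive sampling measurement: flatten a (thresholded)
    wavelet sub-band x and return y = Phi @ x, where Phi is an m x n
    random Gaussian measurement matrix (m << n for compression).
    The decoder would recover x from (y, Phi) via Basis Pursuit."""
    x = subband.ravel()
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x, phi
```

Here a 4x4 sub-band (16 coefficients) is compressed to 8 measurements; in practice m is chosen relative to the sub-band's sparsity after thresholding.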

Conference Proceeding (with ISSN)

A Progressive Statistical and Discrete Cosine Transform based Image codec

Featured 31 January 2008 The 3rd International Conference on Computer Vision Theory and Applications (VISAPP2008) Portugal
Conference Proceeding (with ISSN)

A Novel Unequal Error Protection Scheme for Low Bit-Rate Mobile Video Transmission

Featured 31 August 2005 Ninth Irish Machine Vision & Image Processing Conference 2005 (IMVIP 2005)
Conference Proceeding (with ISSN)

Progressive Multi-resolution HVS and Statistically Based Image Codec

Featured 31 August 2006 Sixth International Conference On Visualization, Imaging, and Image Processing (VIIP2006)
Authors: Bagheri Zadeh P, Sheikh Akbari A, Buggy T, Soraghan J
Conference Proceeding (with ISSN)

A Novel Progressive Image Coding Scheme for Handheld Videophone Applications

Featured 31 August 2005 The Irish Machine Vision & Image Processing Conference 2005 (IMVIP 2005)
Authors: Bagheri Zadeh P, Sheikh Akbari A, Cochran E, Soraghan J
Conference Contribution

Physics-guided Synthetic CFD Data Generation and Explainable Deep Learning Methods for Automated Flow Pattern Classification

Featured 04 November 2025 The 2nd International Conference on Aeronautical Sciences, Engineering and Technology (ICASET 2025) Military Technological College, Muscat, Sultanate of Oman

Computational Fluid Dynamics (CFD) is widely used to analyze fluid flow patterns, but interpreting these patterns manually is time-consuming and requires expert knowledge. This paper introduces a method that combines physics-guided synthetic CFD data generation with explainable deep learning models to automate flow pattern classification. The approach involves generating synthetic data using physics-based simulations and training deep learning models—specifically Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs)—to classify flow patterns. Explainable AI techniques are applied to interpret the model decisions. The results show that Vision Transformers outperform CNNs in classification accuracy and offer better interpretability.

Conference Proceeding (with ISSN)
Iris segmentation using a non-decimated wavelet transform
Featured 01 December 2015 Proceedings of the 2nd IET International Conference on Intelligent Signal Processing, IET London: Savoy Place, UK, IET

This paper presents an iris segmentation algorithm. The proposed technique applies a histogram-based method to the input eye image to extract a point within the pupil. The image is then intensity-sampled over M equiangular radial scan lines, generating M one-dimensional signals. A fuzzy multi-scale edge detection algorithm is applied to each of the resulting radial signals to accurately detect and locate one positive edge point per signal. A uniform cubic B-spline approximation method is further applied to the detected edges to determine the outer iris boundary. The histogram of the area within the extracted outer iris boundary of the eye image is finally used to extract the outer pupil boundary. Experimental results on a number of eye test images taken under visible wavelength from the UBIRISv.1 and UBIRISv.2 databases show that the proposed segmentation method accurately extracts the iris boundaries.
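The radial scan step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `radial_scans` and the toy thresholded `eye` image are this sketch's own assumptions.

```python
import numpy as np

def radial_scans(image, centre, num_rays=64, max_radius=None):
    """Sample the image along equiangular radial scan lines starting at a
    seed point (e.g. one extracted inside the pupil), producing one 1-D
    intensity signal per ray."""
    h, w = image.shape
    cy, cx = centre
    if max_radius is None:
        max_radius = min(cy, cx, h - 1 - cy, w - 1 - cx)
    angles = np.linspace(0.0, 2.0 * np.pi, num_rays, endpoint=False)
    radii = np.arange(max_radius)
    signals = np.empty((num_rays, max_radius))
    for i, theta in enumerate(angles):
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int), 0, w - 1)
        signals[i] = image[ys, xs]
    return signals

# Toy "eye": a dark pupil disc (radius 10) on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
eye = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 100, 10.0, 200.0)
sig = radial_scans(eye, (32, 32))
# Every ray rises from the dark pupil (10.0) to the bright surround (200.0);
# a 1-D edge detector applied per ray would locate that positive transition.
```

In the paper, a fuzzy multi-scale edge detector then picks one positive edge per signal; in this sketch any 1-D step detector could play that role.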

Journal article
A New AI Framework to Support Social-Emotional Skills and Emotion Awareness in Children with Autism Spectrum Disorder
Featured 20 July 2025 Computers14(7):1-18 MDPI AG

This research highlights the importance of Emotion Aware Technologies (EAT) and their implementation in serious games to assist children with Autism Spectrum Disorder (ASD) in developing social-emotional skills. As AI gains popularity, such tools can be used in mobile applications as invaluable teaching aids. In this paper, a new AI framework application is discussed that helps children with ASD develop efficient social-emotional skills. It uses the Jetpack Compose framework and the Google Cloud Vision API as emotion-aware technology. The framework is developed with two main features designed to help children reflect on their emotions, internalise them, and learn how to express them. Each activity is based on similar features from the literature, with enhanced functionalities. A diary feature allows children to take pictures of themselves; the application categorises their facial expressions, saving each picture in the appropriate space. The three-level minigame consists of a series of prompts depicting a specific emotion that children have to match. The results of the framework offer a good starting point for similar applications to be developed further, especially by training custom models to be used with ML Kit.

Journal article
HEVC Based Frame Interleaved Coding Technique for Stereo and Multi-View Videos
Featured 25 November 2022 Information13(12):554 MDPI
Authors: Mallik B, Sheikh Akbari A, Bagheri Zadeh P, Al-Majeed S

The standard HEVC codec and its extension for coding multiview videos, known as MV-HEVC, have proven to deliver improved visual quality compared to their predecessor, H.264/MPEG-4 AVC's multiview extension H.264-MVC, for the same frame resolution with up to 50% bitrate savings. MV-HEVC's framework is similar to that of H.264-MVC, which uses a multi-layer coding approach. Hence, MV-HEVC requires all frames from other reference layers to be decoded prior to decoding a new layer, and the multi-layer coding architecture becomes a bottleneck when it comes to quicker frame streaming across different views. In this paper, an HEVC-based Frame Interleaved Stereo/Multiview Video Codec (HEVC-FISMVC) that uses a single-layer encoding approach to encode stereo and multiview video sequences is presented. The frames of stereo or multiview video sequences are interleaved in such a way that encoding the resulting monoscopic video stream maximizes the exploitation of temporal, inter-view, and cross-view correlations, thus improving the overall coding efficiency. The coding performance of the proposed HEVC-FISMVC codec is assessed and compared with that of the standard MV-HEVC for three standard multi-view video sequences, namely "Poznan_Street", "Kendo" and "Newspaper1". Experimental results show that the proposed codec provides more substantial coding gains than the anchor MV-HEVC for coding both stereo and multi-view video sequences.

Conference Proceeding (with ISSN)

Stereo Correspondence Matching Using Multiwavelets

Featured June 2010 2010 Fifth International Conference on Digital Telecommunications (ICDT) 2010 Fifth International Conference on Digital Telecommunications IEEE
Authors: Zadeh PB, Serdean CV

This paper presents a novel multiwavelet-based stereo correspondence matching technique. A multiwavelet transform is first applied to a pair of stereo images to decorrelate them into a number of approximation (baseband) and detail subbands. Information in the basebands is less sensitive to the shift variability of the multiwavelet transform. The basebands of each input image carry different spectral content of the image; therefore, using the basebands to generate the disparity map is likely to produce more accurate results. A global error energy minimization technique is employed to generate a disparity map for each baseband of the stereo pair. Information in the resulting disparity maps is then combined using a fuzzy algorithm to construct a dense disparity map. A filtering process is finally applied to smooth the disparity map and reduce erroneous matches. Middlebury stereo test images were used to generate experimental results. Results show that the proposed technique produces smoother disparity maps with fewer mismatch errors than applying the same global error energy minimization technique to wavelet-transformed image data. © 2010 IEEE.

Conference Proceeding (with ISSN)

Forensic analysis of private browsing

Featured June 2016 2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) 2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) IEEE
Authors: Geddes M, Zadeh PB

Private browsing is popular with users who wish to keep their internet usage hidden from other users of the same computer. This research examines, using digital forensic tools, what artefacts private browsing leaves on the user's computer. The results help inform recommendations for forensic analysts on ways to analyse private browsing artefacts.

Conference Proceeding (with ISSN)

A mobile forensic investigation into steganography

Featured June 2016 2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) 2016 International Conference On Cyber Security And Protection Of Digital Services (Cyber Security) IEEE
Authors: Burrows C, Zadeh PB

Mobile devices are an increasingly popular tool in day-to-day life; this means that they can accumulate a sizeable amount of information, which can be used as evidence if the device is involved in a crime. Steganography is one way to conceal data, as it both obscures the data and conceals the fact that there is hidden content. This paper investigates different steganography techniques, the steganography artefacts they create, and the forensic investigation tools used to detect and extract steganography on mobile devices. A number of steganography techniques are used to generate different artefacts on the two main mobile device platforms, Android and Apple. Furthermore, forensic investigation tools are employed to detect and, where possible, reveal the hidden data. Finally, a set of mobile forensic investigation policies and guidelines is developed.
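As one concrete example of the kind of technique the paper investigates, the sketch below embeds and recovers a message in image least-significant bits (plain LSB steganography). It is a generic illustration with hypothetical function names, not the specific tools or artefacts studied in the paper.

```python
import numpy as np

def lsb_embed(pixels, message):
    """Hide a UTF-8 message in the least-significant bits of a pixel
    array, preceded by a 32-bit big-endian length header."""
    data = message.encode("utf-8")
    bits = [(len(data) >> (31 - i)) & 1 for i in range(32)]
    for byte in data:
        bits.extend((byte >> (7 - i)) & 1 for i in range(8))
    if len(bits) > pixels.size:
        raise ValueError("cover image too small for message")
    stego = pixels.copy()
    flat = stego.ravel()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.array(bits, dtype=pixels.dtype)
    return stego

def lsb_extract(pixels):
    """Recover a message hidden by lsb_embed."""
    flat = pixels.ravel()
    length = int("".join(str(b & 1) for b in flat[:32]), 2)
    bits = [int(b & 1) for b in flat[32 : 32 + 8 * length]]
    return bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    ).decode("utf-8")

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = lsb_embed(cover, "hidden note")
# The stego image differs from the cover by at most 1 per pixel value,
# which is why such artefacts are hard to spot without dedicated tools.
```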

Conference Proceeding (with ISSN)

Multi-scale, Perceptual and Vector Quantization Based Video Codec

Featured July 2007 Second International Conference on Digital Telecommunications, ICDT 2007 2007 Second International Conference on Digital Telecommunications (ICDT'07) IEEE
Authors: Zadeh PB, Buggy T, Akbari AS

This paper presents a novel hybrid multi-scale, perceptual and vector quantization based video coding scheme. In the intra mode of operation, a wavelet transform is applied to the input frame to decorrelate it into a number of subbands. The lowest frequency subband is losslessly coded. The coefficients of the high frequency subbands are pixel quantized using perceptual weights specifically designed for each high frequency subband. The quantized coefficients are then coded using a quadtree coding scheme. In the inter mode of operation, the displaced frame difference is generated using overlapped block motion estimation/compensation to exploit inter-frame redundancy. A wavelet transform is then applied to the displaced frame difference to decorrelate it into a number of subbands. The coefficients in the resulting subbands are coded using an adaptive vector quantization scheme. To evaluate the performance of the proposed codec, it and the adaptive subband vector quantization (ASVQ) coding scheme, which has been shown to outperform H.263 at all bitrates, were applied to a number of test sequences. Results indicate that the proposed codec outperforms ASVQ subjectively and objectively at all bit rates. © 2007 IEEE.

Conference Proceeding (with ISSN)

Image resolution enhancement using multi-wavelet and cycle-spinning

Featured September 2012 2012 UKACC International Conference on Control (CONTROL) Proceedings of 2012 UKACC International Conference on Control IEEE
Authors: Zadeh PB, Akbari AS

In this paper, a multi-wavelet and cycle-spinning based image resolution enhancement technique is presented. The proposed technique generates a high-resolution image from the input low-resolution image using an inverse multi-wavelet transform in which all multi-wavelet high frequency subband coefficients are set to zero. The concept of the cycle-spinning algorithm, in conjunction with the multi-wavelet transform, is then used to generate a high-quality super-resolution image from the resulting high-resolution image, as follows. A number of replicated images with different spatial shifts are first generated from the resulting high-resolution image. Each replicated image is decorrelated into its subbands using a multi-wavelet transform. The multi-wavelet high frequency subband coefficients of each decorrelated image are set to zero, and a primary super-resolution image is produced for each using an inverse multi-wavelet transform. The resulting primary super-resolution images are then spatially shift-compensated, and the output super-resolution image is created by averaging the shift-compensated images. Experimental results were generated using four standard test images and compared to state-of-the-art techniques. Results show that the proposed technique significantly outperforms classical and non-classical super-resolution methods both subjectively and objectively. © 2012 IEEE.
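The enhancement scheme described above can be sketched with an ordinary Haar wavelet standing in for the multi-wavelet transform (an assumption made for brevity; the function names are this sketch's own, not the paper's code):

```python
import numpy as np

def ihaar2_from_approx(a):
    """Inverse 2-D Haar step with all detail subbands set to zero: each
    approximation coefficient spreads evenly over a 2x2 block."""
    up = np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)
    return up / 2.0  # orthonormal Haar scaling

def haar2_approx(x):
    """Forward 2-D Haar step, keeping only the approximation band."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2.0

def cycle_spin_sr(low_res, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
    """1) Inverse transform with zeroed detail subbands gives a first
    high-resolution estimate; 2) shifted copies are re-decomposed,
    detail-zeroed, reconstructed, unshifted, and averaged."""
    hi = ihaar2_from_approx(low_res)
    acc = np.zeros_like(hi)
    for dy, dx in shifts:
        shifted = np.roll(hi, (dy, dx), axis=(0, 1))
        rec = ihaar2_from_approx(haar2_approx(shifted))
        acc += np.roll(rec, (-dy, -dx), axis=(0, 1))
    return acc / len(shifts)

lr = np.arange(16, dtype=float).reshape(4, 4)
sr = cycle_spin_sr(lr)  # 8x8 super-resolution estimate of the 4x4 input
```

Averaging over shifts suppresses the blocking that a single shift-variant reconstruction would leave behind, which is the point of cycle spinning.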

Conference Proceeding (with ISSN)

A Digital Forensics Live Suspicious Activity Toolkit To Assist Investigators With Sexual Harm Prevention Order Monitoring

Featured 22 June 2022 2022 IEEE Conference on Dependable and Secure Computing (DSC) 2022 IEEE Conference on Dependable and Secure Computing (DSC) IEEE
Authors: Scholey A, Zadeh PB

The National Society for the Prevention of Cruelty to Children (NSPCC) and the Internet Watch Foundation (IWF) report a growing amount of child sexual abuse material within the UK, substantiated by the National Crime Agency (NCA). This paper investigates the increasing, time-consuming burden placed upon police forces by the required regular examination of digital devices belonging to sentenced paedophiles and individuals bound by a Sexual Harm Prevention Order (SHPO). By examining some of the motivations behind offenders and their desire to habitually offend, and by using the most common traits amongst them, indicators of suspicious behaviour emerge. In this paper, a proof-of-concept digital forensic investigation toolkit is proposed to assist Public Protection Officers (PPOs) in the analysis of digital devices belonging to these individuals. The proposed Live Suspicious Activity Toolkit (LiSA-T) enables a time-efficient, up-to-date assessment of any suspicious activity and behaviour on a Windows 10 computer. Using specific modules that can be turned on and off, updated, and given unique preferences, LiSA-T evaluates and then reports its findings, assisting the PPO in making an informed decision on whether to involve the Digital Forensic Unit (DFU) to examine a device in a more in-depth forensic manner. The test results demonstrated that the proposed LiSA-T techniques detect the targeted evidential artefacts for the defined suspicious activities at low computational cost.

Journal article
Attribution-Based Explainability in Medical Imaging: A Critical Review on Explainable Computer Vision (X-CV) Techniques and Their Applications in Medical AI
Featured 31 August 2025 Electronics14(15):1-26 MDPI AG
Authors: Alam KN, Zadeh PB, Sheikh-Akbari A

One of the largest future applications of computer vision is in the healthcare industry. Computer vision tasks are implemented in diverse medical imaging scenarios, including detecting or classifying diseases, predicting potential disease progression, analyzing cancer data to advance future research, and conducting genetic analysis for personalized medicine. However, a critical drawback of Computer Vision (CV) approaches is their limited reliability and transparency. Clinicians and patients must comprehend the rationale behind predictions or results to ensure trust and ethical deployment in clinical settings. This motivates the adoption of Explainable Computer Vision (X-CV), which enhances vision-related interpretability. Among various methodologies, attribution-based approaches are widely employed by researchers to explain medical imaging outputs by identifying influential features. This article aims to explore how attribution-based X-CV methods work in medical imaging, what they are good for in real-world use, and what their main limitations are. This study evaluates X-CV techniques by conducting a thorough review of relevant reports, peer-reviewed journals, and methodological approaches to obtain an adequate understanding of attribution-based approaches. It explores how these techniques tackle computational complexity issues, improve diagnostic accuracy, and aid clinical decision-making processes. This article intends to present a path that generalizes the concept of trustworthiness towards AI-based healthcare solutions.

Conference Proceeding (with ISSN)

Multiresolution statistical and vector quantization based video codec

Featured 01 December 2008 Proceedings of the 5th IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2008)
Authors: Zadeh PB, Buggy T, Akbari AS

This paper presents a novel hybrid multiresolution statistical and vector quantization based video coding scheme. In the intra mode of operation, a wavelet transform is used to decorrelate the input frame into a number of subbands. The high frequency subbands are coded using a novel statistically based coding algorithm. In the inter mode of operation, overlapped block motion estimation/compensation is employed to exploit inter-frame redundancy. A wavelet transform is then applied to the displaced frame difference to decorrelate it into a number of subbands. The coefficients in the resulting subbands are coded using an adaptive vector quantization scheme. To evaluate the performance of the proposed codec, it and the adaptive subband vector quantization (ASVQ) coding scheme, which has been shown to outperform H.263 at all bitrates, were applied to a number of test sequences. Results indicate that the proposed codec outperforms ASVQ subjectively and objectively at all bit rates.

Conference Proceeding (with ISSN)

Progressive DCT Based Image Codec Using Statistical Parameters

Featured 2008 International Conference on Computer Vision Theory and Applications Proceedings of the Third International Conference on Computer Vision Theory and Applications SciTePress - Science and Technology Publications
Authors: Zadeh PB, Buggy T, Akbari AS

This paper presents a novel progressive statistical and discrete cosine transform based image-coding scheme. The proposed coding scheme divides the input image into a number of non-overlapping pixel blocks. The coefficients in each block are then decorrelated into their spatial frequencies using a discrete cosine transform. Coefficients with the same spatial frequency at different blocks are put together to generate a number of matrices, where each matrix contains coefficients of a particular spatial frequency. The matrix containing DC coefficients is losslessly coded to preserve visually important information. Matrices, which consist of high frequency coefficients, are coded using a novel statistical encoder developed in this paper. Perceptual weights are used to regulate the threshold value required in the coding process of the high frequency matrices. The coded matrices generate a number of bitstreams, which are used for progressive image transmission. The proposed coding scheme, JPEG and JPEG2000 were applied to a number of test images. Results show that the proposed coding scheme outperforms JPEG and JPEG2000 subjectively and objectively at low compression ratios. Results also indicate that the decoded images using the proposed codec have superior subjective quality at high compression ratios compared to that of JPEG, while offering comparable results to that of JPEG2000.
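The block-DCT and same-frequency regrouping step described above can be sketched as follows. The orthonormal DCT matrix and the `regroup` helper are this sketch's own constructions, not the paper's code:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def regroup(image, b=8):
    """Block 2-D DCT, then gather the coefficient of each spatial
    frequency from every block into its own matrix, as the codec above
    describes."""
    d = dct_matrix(b)
    h, w = image.shape
    blocks = image.reshape(h // b, b, w // b, b).transpose(0, 2, 1, 3)
    coeffs = d @ blocks @ d.T  # per-block 2-D DCT via matrix multiplication
    return coeffs.transpose(2, 3, 0, 1)  # [u, v] -> matrix of frequency (u, v)

img = np.random.default_rng(0).random((16, 16))
mats = regroup(img)
# mats[0, 0] is the matrix of DC coefficients (losslessly coded in the
# scheme above); the high-frequency mats[u, v] feed the statistical encoder.
```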

Conference Proceeding (with ISSN)

Stereo image representation using compressive sensing

Featured May 2011 2011 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2011) 2011 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON) IEEE
Authors: Akbari AS, Zadeh PB, Moniri M

This paper presents a compressive sensing based stereo image representation technique using wavelet transform gains. The pair of input stereo images is first decomposed into low-pass and high-pass views using a motion compensated lifting based wavelet transform. A 2D spatial wavelet transform then further decorrelates the low-pass view into its sub-bands. Wavelet transform gains are employed to regulate the threshold value for different sub-bands. The coefficients in the high frequency sub-bands and the high-pass view are then hard thresholded to generate their sparse counterparts. The compressive sensing method is then used to generate measurements for the resulting sparse sub-bands and view. The baseband coefficients and measurements are finally losslessly coded. The application of compressive sensing to compressing natural images is in its early stages; therefore, such codecs are usually compared with each other rather than with standard codecs. The performance of the proposed codec is superior to the state of the art, and superior to JPEG subjectively. © 2011 IEEE.

Conference Proceeding (with ISSN)

A Novel H.264/AVC Based Multi-View Video Coding Scheme

Featured May 2007 2007 3DTV Conference 2007 3DTV Conference IEEE
Authors: Akbari AS, Canagarajah N, Redmill D, Bull D, Agrafiotis D

This paper investigates extensions of H.264/AVC for compressing multi-view video sequences. The proposed technique re-sorts frames of sequences captured by multiple cameras looking at a person in a scene from different views and generates a single video sequence. The multi-frame referencing property of the H.264/AVC, which enables exploitation of the spatial and temporal redundancy contained in the multi-view sequences, is employed to implement several modes of operation in the proposed coding algorithm. To evaluate the performance of the proposed coding technique at different modes of operations, five multi-view video sequences at different frame rates were coded using the proposed and the simulcast H.264/AVC coding schemes. Experiments show the superior performance of the proposed coding scheme when coding the multi-view sequences at low and up to half of the original frame rates. © 2007 IEEE.

Conference Proceeding (with ISSN)

Disparity compensated view filtering wavelet and compressive sampling based multi-view image codec

Featured 01 December 2013 Proceedings Elmar International Symposium Electronics in Marine
Authors: Akbari AS, Zadeh PB

This paper presents a multi-view image codec using a disparity compensated lifting based wavelet transform and Compressive Sampling (CS). Disparity compensated view filtering lifting based wavelet transforms are applied to the input multi-view images, decomposing them into their view sub-bands. The dense view is further decomposed into its spatial sub-bands using a wavelet transform. High frequency coefficients are hard thresholded to improve and control their sparsity. For the high frequency sub-bands/views, wavelet weights are calculated and used to regulate the threshold values for those sub-bands/views. The CS algorithm is then used to generate measurements for each resulting sparse sub-band. On the decoder side, the Basis Pursuit method is used to recover the dominant coefficients. An assessment of the energy of the non-dominant coefficients at different compression ratios, and of their effect on the quality of the reconstructed images, is given. Results show that the proposed codec outperforms state-of-the-art codecs. © 2013 Croatian Society Electronics in Marine - ELMAR.

Conference Proceeding (with ISSN)

Wavelet-based video codec using human visual system coefficients for 3G mobiles

Featured 03 April 2015 European Signal Processing Conference
Authors: Akbari AS, Zadeh PB, Cochran E, Soraghan J

A new wavelet based video codec that uses Human Visual System coefficients is presented. In the INTRA mode of operation, a wavelet transform is used to split the input frame into a number of subbands. Human Visual System coefficients, designed for handheld videophone devices, are used to regulate the quantization step size in the pixel quantization of the high frequency subbands' coefficients. The quantized coefficients are coded using a quadtree coding scheme. In the INTER mode of operation, the displaced frame difference is generated and a wavelet transform decorrelates it into a number of subbands. These subbands are coded using an adaptive vector quantization scheme. Results indicate a significant improvement in frame quality compared to Motion JPEG2000.

Conference Proceeding (with ISSN)

A Novel Multiresolution Perceptual and Statistically Based Image Coding Scheme

Featured 29 August 2006 International Conference on Digital Telecommunications (ICDT'06) International Conference on Digital Telecommunications (ICDT'06) IEEE
Authors: Zadeh PB, Buggy T, Soraghan JJ, Sheikh Akbari A

In this paper a new hybrid multiresolution Human Visual System and statistically based image coding scheme is presented. It decorrelates the input image into a number of subbands using a lifting based wavelet transform and employs a novel statistically based coding algorithm to code the coefficients in the detail subbands. Perceptual weights are applied to regulate the threshold value of each detail subband that is required in the coding process. The baseband coefficients are losslessly coded. To evaluate the performance of the coding scheme, it was applied to a number of test images with and without perceptual weights. The results indicate significant improvement in both subjective and objective quality of the reconstructed images when the perceptual weights are employed. The performance of the proposed technique was also compared to JPEG and JPEG2000. The results show that the proposed computationally efficient coding scheme outperforms both coding standards at low compression ratios, while offering satisfactory performance at higher compression ratios. © IEEE.

Conference Proceeding (with ISSN)
PCA in the context of Face Recognition with the Image Enlargement Techniques
Featured 01 June 2019 MECO 2019: The 8th Mediterranean Conference on Embedded Computing 2019 8th Mediterranean Conference on Embedded Computing, MECO 2019 - Proceedings Budva, Montenegro IEEE
Authors: Halidu MK, Bagheri-Zadeh P, Sheikh-Akbari A, Behringer R

Face recognition has become a field of interest in many applications such as security and entertainment. In surveillance systems, the quality of the recorded footage is sometimes insufficient due to the distance and angle of the camera from the scene. This causes the object of interest, e.g. the face of a person in the scene, to be of low resolution, which makes recognition more difficult. Image resolution enhancement is a potential solution for enlarging low-resolution images for real-time face recognition. An enlarged image is then compared to an available database of images to either identify or verify individuals. However, the performance of face recognition techniques under various image enlargement methods has not been investigated. In this research, the performance of a PCA based face recognition method with the three most well-known image enlargement techniques (Nearest Neighbour, Bilinear, Bicubic) is investigated. First, an input image is down-sampled to six different resolutions. The down-sampled image is then enlarged to its original size using the three named image enlargement techniques. The enlarged image is then input to a PCA face recognition system for the recognition process. The simulation results, using images from the SCFace database, show that PCA based face recognition gives superior results when input images are enlarged using the Nearest Neighbour technique, while the performance of the Bicubic and Bilinear techniques is slightly lower than the Nearest Neighbour method.
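The evaluation pipeline described above can be sketched as below. Random vectors stand in for the SCFace gallery (an assumption made for self-containment), and all helper names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in gallery of ten flattened 16x16 "faces" (random data, not SCFace).
gallery = rng.normal(size=(10, 16 * 16))

# PCA / eigenfaces: centre the gallery and keep the top components.
mean = gallery.mean(axis=0)
_, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
components = vt[:8]

def project(img_vec):
    """Project a flattened image onto the eigenface subspace."""
    return (img_vec - mean) @ components.T

gallery_features = project(gallery)

def nearest_neighbour_enlarge(img, factor):
    """Nearest-neighbour enlargement: replicate each pixel `factor` times
    along both axes (the best-performing enlargement reported above)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def recognise(probe):
    """Enlarge a low-resolution probe to gallery size, project it, and
    return the index of the closest gallery face in eigenface space."""
    enlarged = nearest_neighbour_enlarge(probe, 16 // probe.shape[0]).ravel()
    dists = np.linalg.norm(gallery_features - project(enlarged), axis=1)
    return int(np.argmin(dists))

# Down-sample gallery face 3 to 8x8, then run it through the pipeline.
probe = gallery[3].reshape(16, 16)[::2, ::2]
match = recognise(probe)
```

Swapping `nearest_neighbour_enlarge` for a bilinear or bicubic interpolator reproduces the comparison the paper makes.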

Conference Proceeding (with ISSN)

Multi-resolution, perceptual and compressive sampling based image codec

Featured 2012 IET Conference on Image Processing (IPR 2012) IET Conference on Image Processing (IPR 2012) IET
Authors: Akbari S, Ahmed KI, Zadeh PB, Moniri M

Direct application of compressive sampling to coding the wavelet high frequency coefficients of an image significantly deteriorates the quality of the reconstructed image. This is due to an error introduced by the many high frequency coefficients that have small but nonzero values. In this paper, a novel multi-resolution image coding scheme using compressive sampling and perceptual weights is presented that significantly improves the quality of the reconstructed images by setting the coefficients with small values to zero using two different hard thresholding operators. The proposed codec applies a wavelet transform to the input image and decorrelates it into its frequency subbands. Baseband coefficients are losslessly coded to preserve their visually important information. The high frequency subbands' coefficients are hard thresholded to improve and control their sparsity. Perceptual weights for the different wavelet subbands are calculated and used to adjust the threshold values for those subbands. The compressive sampling algorithm is used to generate measurements for each resulting sparse subband. The measurements for each subband are then cast to integers and arithmetic coded. On the decoder side, the Basis Pursuit method is used to recover the coefficients. Empirical values of the observation factor giving the best coding performance were first determined using standard test images. The performance of the codec was then assessed using standard test images. Results show that the application of perceptual weights in regulating threshold values significantly improves the coding performance of the codec.
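The thresholding and measurement steps can be sketched as follows. This is a minimal sketch under assumed names (`sparsify`, `measure`); the Basis Pursuit recovery on the decoder side is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(subband, weight, base_threshold=1.0):
    """Hard-threshold a wavelet subband; the perceptual weight scales the
    threshold so visually less important subbands are thresholded harder."""
    t = base_threshold * weight
    return np.where(np.abs(subband) >= t, subband, 0.0)

def measure(sparse_subband, m):
    """Compressive-sampling step: m random Gaussian measurements of the
    flattened sparse subband (the decoder would run Basis Pursuit on y)."""
    x = sparse_subband.ravel()
    phi = rng.normal(size=(m, x.size)) / np.sqrt(m)
    return phi @ x, phi

subband = rng.normal(size=(8, 8))   # stand-in high-frequency subband
sparse = sparsify(subband, weight=2.0)
y, phi = measure(sparse, m=16)      # 16 measurements for 64 coefficients
```

A larger perceptual weight zeros more of the small coefficients, which is exactly what makes the subband sparse enough for compressive sampling to work.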

Current teaching

  • Computing System (level 4)
  • Team Project (level 5)
  • Embedded Intelligent and Vision Systems (level 6)
  • Mobile Forensics Investigation (level 6)
  • Forensics Image Processing (level 7)
  • Final Year Project supervision (level 6)
  • MSc Project Supervision (level 7)
  • PhD supervision
