Measurement of Magnetism in Composite Materials – Towards Novel Magnetic Materials
University of Pisa, Italy
Magnetic materials have always received great attention from the scientific and industrial communities for their central role in many fields of science and technology. The development of novel magnetic materials, including magnetic nanoparticles, thin magnetic films, and composite materials, opens up novel present and future applications. Several known methods and techniques have been employed to characterize magnetism in continuous media. To this end, we can mention inductive methods, largely employed for the characterization of closed magnetic samples, as well as the use of magnetometers to characterize open magnetic samples.
This tutorial aims at revisiting the very basic concepts of the measurement of magnetic behavior and magnetic permeability, and at discussing possible novel magnetic materials, even ones with very unusual, arbitrary B-H relationships. Starting from the spatial averaging of unobservable microscopic fields and the identification of the observable macroscopic fields as introduced by Lorentz, the measurement of magnetic permeability in composite materials is discussed from its basic definition. Particular composite resonator structures are considered, and it is shown how they can exhibit even negative magnetic permeabilities at industrial frequencies. The tutorial is organized as follows:
- Introduction to magnetism, magnetic materials and principal methods of characterization (30 min);
- Revisiting the concept of macroscopic field in continuous media, of magnetic permeability as well as of its measurement (15 min);
- Application of basic concepts for the measurements of the effective magnetic permeability in composite resonator structures (30 min);
- Short discussion towards novel magnetic materials and their characterization (15 min).
Bernardo Tellini was born in Pisa, Italy, on July 7, 1969. Currently, he is a Full Professor with the Department of Energy, Systems, Territory and Construction Engineering at the University of Pisa. From 2010 to 2014, he was the Chair of European Pulsed Power Laboratories, a research network among European research and academic institutions. In 2015, he was the General Chair of the International Instrumentation and Measurement Technology Conference. From 2014 to 2016 he was a member of the I2MTC Board of Directors. Recently, he has been elected Chair of the IEEE Italy Section. His main research interests include measurements of electric and magnetic quantities, the characterization of magnetic materials and metamaterials, pulsed power metrology with particular application to electromagnetic launchers, measurements for railway systems, and the characterization of aging processes in lithium battery cells.
Flicker of Artificial Light Sources in the Home
Chris Chitty & Susan Mander
Massey University, Auckland
Light flicker has been known to be an issue since the 1940s, when magnetic/inductive ballasts caused fluorescent tubes to flicker at twice the mains frequency. Complaints from office workers at the time included eyestrain, watering eyes, eye fatigue and headaches. In response to these adverse effects, high-frequency ballasts were developed, which drove the tubes at approximately 60 kHz and markedly reduced flicker.
In the last decade, retrofit LED lamps have become popular amongst domestic users. Products are not standardised in this growing industry, and are often functionally and aesthetically different from each other. This large variation has resulted in the manufacture of some sources that create high levels of flicker. Unlike linear fluorescent lighting, which is largely installed in a commercial environment under expert supervision, householders can now purchase their own LED replacement lamps off the shelf and install them with no lighting knowledge.
The tutorial aims to raise awareness of flicker from artificial light sources in the home, by demonstrating this in real time. Our research has shown that it is possible to produce LED lamps that have stable intensity. However, we have also uncovered many poor quality sources that could be creating issues for domestic consumers, their families and their pets.
The tutorial will begin with a brief history of flicker and its effects on humans and wildlife. We will then demonstrate individual light sources and combinations of components that are able to create varying levels of flicker. The tutorial will conclude with an opportunity for audience members to test different lighting samples using different devices. As Massey University is based in Auckland, the 2019 conference provides the perfect opportunity to bring our test and demonstration equipment to the tutorial, and provide a truly interactive hands-on talk.
The quality of domestic lighting affects everyone. This tutorial has therefore been prepared for a general audience and is not restricted to lighting specialists; neither is it restricted to a specialised lighting laboratory. We have chosen to use common household equipment to emphasise that problems with flicker can affect us all. We also aim to encourage the further development of systems and equipment to measure flicker.
Chris Chitty is a product development specialist at Massey University, Auckland. Over his varied career Chris has created all manner of devices including medical simulators, interactive museum displays and hyper-realistic robotic animals for motion pictures. He has also featured as “Doctor Robotech” in four series of the television show Let’s Get Inventin’, which develops concepts into products and brings the process alive for young and old. His current research focuses on the interactions between technology and biological systems.
Susan Mander has over 20 years’ experience in illumination engineering and presently leads the lighting program at Massey University. Susan has a strong industry focus, having worked previously as a senior electrical engineer for an international consultancy group. Susan’s research is in the performance of light-emitting diode (LED) technology when used as the primary source of illumination for residential, commercial and industrial applications.
Frequency Domain Sine-wave Parameter Estimation
Dominique Dallet & Daniel Belega
Engineering School Bordeaux INP & Politehnica University of Timişoara
This tutorial is focused on the application of the Interpolated Discrete Fourier Transform (IpDFT) algorithm to measurements. Firstly, the IpDFT algorithm based on a generic cosine window is presented and its advantages are highlighted. The influence of both the spectral image of the fundamental component and wideband noise on the IpDFT estimators is theoretically analyzed. In addition, the particular case in which the Maximum Sidelobe Decay (MSD) windows are adopted is considered. To reduce the contribution of the spectral image of the fundamental component to the IpDFT estimator, which is very important when the number of acquired sine-wave cycles is small, multi-point IpDFT algorithms have been proposed. Two such algorithms with very high efficiency are then presented: the finite-difference-based multi-point IpDFT algorithms and the three-point IpDFT algorithm, which completely eliminate the above detrimental contribution. These algorithms are based on the MSD windows. The performances of these algorithms are theoretically compared with that of the classical IpDFT algorithm in the case of noisy sine-waves. Finally, the application of the IpDFT algorithms to the dynamic testing of analog-to-digital converters (ADCs) and to synchrophasor measurements in steady-state conditions is presented. In all the considered measurement fields, theoretical, simulation, and experimental results will be provided.
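To make the basic two-point IpDFT idea concrete, the following sketch estimates a sine-wave frequency from the two largest Hann-windowed DFT bins. The Hann window is the two-term MSD window, for which the interpolation formula delta = (2*alpha - 1)/(alpha + 1) holds; the function name and test signal are illustrative assumptions, not the multi-point algorithms covered in the tutorial.

```python
import numpy as np

def ipdft_frequency(x, fs):
    """Two-point interpolated DFT (IpDFT) frequency estimate using a Hann window."""
    N = len(x)
    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    k = int(np.argmax(X[1:-1])) + 1          # peak bin (skip DC and Nyquist)
    # interpolate towards the larger neighbour; formula is exact for the
    # ideal Hann mainlobe, residual error comes from leakage and the image
    if X[k + 1] >= X[k - 1]:
        alpha = X[k + 1] / X[k]
        delta = (2 * alpha - 1) / (alpha + 1)
    else:
        alpha = X[k - 1] / X[k]
        delta = -(2 * alpha - 1) / (alpha + 1)
    return (k + delta) * fs / N
```

With a non-integer number of cycles in the record, the plain peak-bin estimate is off by up to half a bin, while the interpolated estimate recovers the fractional bin offset.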
Dominique Dallet is a Full Professor of digital electronic design at the engineering school Bordeaux INP, ENSEIRB-MATMECA. He received the Ph.D. degree in electrical engineering from the University of Bordeaux, Talence, France, in 1995.
He was the Head of the Electronic Embedded System Department from 2010 to 2013. He is the Head of the Electronic Design group of the IMS Laboratory. Since 2015, he has been the chairperson of the Technical Committee “Measurement of Electrical Quantities” – IMEKO TC4. Dr. Dominique Dallet conducts his research at the IMS Laboratory (Laboratoire de l’Intégration du Matériau au Système), University of Bordeaux, Bordeaux INP, CNRS UMR 5218. His research activities are mainly focused on data converter (A/D – D/A) modelling and testing, parameter estimation, digital signal processing implementation on different targets (ASIC, FPGA), and electronic design for the digital enhancement of analog and mixed-signal electronic circuits (ADC, DAC, PA). He has authored over 200 papers in international and national journals or in proceedings of peer-reviewed international conferences, book chapters, and patents. He received as co-author the István Kollár Award for the Best Paper Presented at the IMEKO-TC4 2016 Symposium, entitled “Accurate sine-wave frequency estimation by means of an interpolated DTFT algorithm”.
Daniel Belega received the B.S. degree in electronics and telecommunications, the M.S. degree in electrical measurements, and the Ph.D. degree in electronics from the Politehnica University of Timişoara, Timişoara, Romania, in 1994, 1995, and 2001, respectively. He joined the Department of Measurements and Optical Electronics, Faculty of Electronics and Telecommunications, Politehnica University of Timişoara, where he is currently a Full Professor. He has authored over 100 papers in international and national journals or in proceedings of peer-reviewed international conferences.
His current research interests include the applications of digital signal processing to measurements, parameter estimation, phasor estimation for power systems, and analog-to-digital converter testing.
Daniel Belega was recognized as one of the IEEE Transactions on Instrumentation and Measurement Outstanding Reviewers in 2009, 2012, and 2013-2018, and received the Certificate of Outstanding Contribution in Reviewing from the Digital Signal Processing journal in 2016 and 2018, the Signal Processing journal in 2017, and the Measurement journal in 2017. He received as author the István Kollár Award for the Best Paper Presented at the IMEKO-TC4 2016 Symposium, entitled “Accurate sine-wave frequency estimation by means of an interpolated DTFT algorithm”. He also received as author the I2MTC 2017 Best Paper Award - 2nd Place for the paper entitled “A frequency-domain linear least-squares approach for complex sine-wave amplitude and phase estimation”.
Measurements Applications for Autonomous Systems
Department of Industrial Engineering, University of Trento - Italy
Autonomous systems are nowadays undeniably pervasive in modern society. Autonomous driving cars as well as applications of service robots (e.g. cleaning robots, companion robots, intelligent healthcare solutions, tour-guide systems) are becoming more and more popular, and a general acceptance of such systems is now developing in modern societies. Nonetheless, one of the major problems in building such applications lies in the capability of autonomous systems to understand their surroundings and then plan proper counteractions. The most popular solutions, which are gaining more and more attention, rely on artificial intelligence and deep learning as a means to understand the structured and complex natural environment. Nonetheless, besides the importance of such complex tools, classical metrology concepts, such as uncertainty and precision, remain indispensable for the clear and effective application of modern autonomous systems.
In this tutorial, some measurement concepts will be revisited in light of the autonomous systems domain. In particular, we will cover the main concepts of the statistical approach to measurements, which will then be applied to:
- Uncertainty analysis and synthesis for autonomous systems localization
- Precision-based feedback for social robotics
- Distributed localization as an example of social behavior
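As a minimal illustration of uncertainty synthesis in localization, consider fusing two independent estimates of the same position coordinate with inverse-variance weights; the fused variance is always smaller than either input variance. This sketch, including the function name and the odometry/range-sensor framing, is an assumed example, not material taken from the tutorial itself.

```python
def fuse(est1, var1, est2, var2):
    """Variance-weighted fusion of two independent estimates of the same
    quantity (e.g. a robot position from odometry and from a range sensor)."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # weights = inverse variances
    fused = (w1 * est1 + w2 * est2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)            # fused value and its variance
```

For example, fusing 10.0 m (variance 1.0) with 12.0 m (variance 1.0) gives 11.0 m with variance 0.5: combining measurements never increases the uncertainty.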
Methodology of Measurement
Department of Industrial Engineering, University of Trento - Italy
The acquisition of information about physical quantities by means of sensors has historically fostered the interpretation of measurement as a merely experimental activity. Conversely, measurement is a complex activity, far more complex than suitably connecting and reading an instrument. Indeed, measurement always requires descriptive activities to be performed prior to the execution of empirical activities, to ensure both the correct implementation of the experiments and the correct interpretation of the obtained information.
In this tutorial, a conceptual framework highlighting the activities required to develop a measurement is presented and discussed. In such a framework, synthesized in Fig. 1, measurement is envisioned as a three-level hierarchically structured process consisting of stages (planning, execution and interpretation), each one composed of activities performed through multiple tasks. A loose temporal sequence drives the execution of tasks (black arrows in the diagram), but the systematic presence of feedback (white arrows) emphasizes the complexity of the whole process.
The framework is very general. It supports a methodologically correct development of any measurement, regardless of the kind of involved quantities (either physical or non-physical) or the field of application.
The framework is based on the following widely accepted assumptions:
- measurement is required to provide objective and inter-subjective information about properties of empirical objects, phenomena or events;
- measurement is not a self-motivating activity, but it is rather a goal-driven process: obtained information is usually employed as relevant input when deciding the best actions to be performed to achieve established goals, while satisfying given conditions;
- any empirical property can be, in principle, measured by performing logically equivalent steps;
- models are unavoidable in measurement, and they are co-determined by the measurement goals.
At the end of this tutorial the attendee will be able to answer such questions as: Which informative empirical processes can be considered measurements? How do I determine an adequate model for a given measurement? How do I estimate and express the quantity of information I achieve through measurement?
Dario Petri (M’92-SM’05-F’09) is a Full Professor of measurement and electronic instrumentation and the head of the Department of Industrial Engineering, University of Trento, Italy. He is an IEEE Fellow and has been the chair of the IEEE Smart Cities Initiative in Trento since 2015. He was the Head of the Department of Information Engineering and Computer Science of the same University from 2010 to 2012, the chair of the Italian Association of Electrical and Electronic Measurements (GMEE) from 2013 to 2016, the VP for Finance of the IEEE Instrumentation and Measurement Society from 2013 to 2018, and the chair of the IEEE Italy Section from 2012 to 2014. He was also a member of the Italian Group of Experts for Evaluation (GEV) for research in the area of Industrial and Information Engineering in 2016 and 2017. Dario Petri received the M.Sc. degree (summa cum laude) and the Ph.D. degree in electronics engineering from the University of Padua, Padua, Italy, in 1986 and 1990, respectively. During his research career, Dario Petri has authored more than 300 papers published in international journals or in proceedings of peer-reviewed international conferences. His research activities cover different fields and are focused on data acquisition systems, embedded systems, instrumentation for smart energy grids, fundamentals of measurement theory, and the application of digital signal processing to measurement problems.
Using FPGA Based Imaging for Low Latency Measurement
Professor of Imaging Systems at Massey University
Image processing is often used for instrumentation and measurement because images can provide a large volume of often high-quality data relatively quickly. The main advantage of images is also their main limitation – the large volume of data that needs to be processed, often in real time. In many applications, such as machine vision and sensing for control, processing images with low latency is important, if not critical. FPGAs are increasingly being used as an implementation platform for embedded real-time image processing applications because their structure is able to exploit spatial and temporal parallelism. They are also able to directly process the pixel stream from the sensor, without having to wait for the complete image to be captured first. This gives them a huge advantage in low-latency applications. This tutorial will outline the issues associated with mapping an image processing algorithm onto an FPGA-based platform. The tutorial will illustrate a range of image processing operations and introduce a range of techniques for efficiently implementing these on FPGAs.
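The stream-processing idea can be sketched behaviourally: instead of buffering the whole frame, an FPGA keeps only a couple of line buffers and emits each output pixel as soon as its neighbourhood is complete. The following is a Python behavioural model (not HDL) of a streaming 3x3 mean filter; the function name and the row-major 8-bit raster layout are assumptions for illustration.

```python
def stream_3x3_mean(pixels, width):
    """Behavioural model of an FPGA-style streaming 3x3 mean filter.
    Storage is limited to two line buffers plus the incoming row; each
    output is produced as soon as its 3x3 window is complete."""
    rows = [[], [], []]               # two line buffers + current row
    out = []
    for p in pixels:
        rows[2].append(p)
        c = len(rows[2]) - 1
        # once two full rows are buffered and three columns have arrived,
        # the window centred one row/column back is complete
        if len(rows[0]) == width and c >= 2:
            win = [rows[r][c - k] for r in range(3) for k in range(3)]
            out.append(sum(win) // 9)
        if c == width - 1:            # end of line: shift the line buffers
            rows = [rows[1], rows[2], []]
    return out
```

An H x W frame yields (H-2) x (W-2) outputs, and the latency from a pixel arriving to the corresponding output is one row plus one pixel, not a whole frame.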
Donald Bailey has BE(Hons) (1982) and PhD (1985) degrees in Electrical and Electronic Engineering from the University of Canterbury, New Zealand. He is a Senior Member of IEEE. He is currently Professor of Imaging Systems at Massey University, and is leader of the Centre for Research in Image and Signal Processing. Donald has spent 30 years applying image processing technology to a range of industrial, machine vision and robot vision applications. For the last 17 years, one area of particular research focus has been exploring aspects of using FPGAs for implementing and accelerating image processing algorithms. He is the author of many publications in this field, including the book “Design for Embedded Image Processing on FPGAs”, published by Wiley / IEEE Press.
Signal Acquisition from Conversion to Compression
Asma Maalej, Manel Ben Romdhane & Dominique Dallet
University of Carthage & University of Bordeaux
In a radio receiver, the first step towards digital signal processing is analog-to-digital conversion. It represents a crucial step since it has to satisfy a compromise between frequency, resolution and power. Many analog-to-digital converter (ADC) architectures have been proposed to find ways to meet the challenges of the new technologies. With the aim of reducing power consumption or raising the performance of the system, an asynchronous sampling step can be involved. In such a case, a reconstruction step is needed to preserve conventional digital signal processing. In addition, a compression step can also be included to optimize power consumption while transmitting data over the radio link, for example. In this context, this tutorial, organized in three parts, focuses on the path from analog signal conversion to compressed digital data.
In the first part, the theory of analog-to-digital conversion is presented, as well as the way to implement it. Firstly, the sampling and quantizing steps are illustrated. The conventional ADC architectures are presented while explaining how these two steps are accomplished. A state-of-the-art survey of such ADCs shows the performance of ADC architectures in terms of frequency, resolution and power consumption. Secondly, other innovative ADC architectures are presented. Based on non-uniform or asynchronous sampling, these architectures aim to reach higher ADC performance compared to conventional architectures.
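The quantizing step mentioned above can be sketched with an ideal uniform quantizer: the input is rounded to the nearest of 2^n equally spaced levels and clipped at full scale. The function name and the mid-tread convention are illustrative assumptions.

```python
def quantize(x, n_bits, full_scale=1.0):
    """Ideal mid-tread uniform quantizer over [-full_scale, +full_scale)."""
    lsb = 2.0 * full_scale / 2**n_bits                    # quantization step
    code = round(x / lsb)                                 # nearest level
    code = max(-(2**(n_bits - 1)), min(2**(n_bits - 1) - 1, code))
    return code * lsb                                     # reconstructed value
```

The worst-case rounding error is half an LSB, which is the origin of the familiar trade-off between resolution (more bits, smaller LSB) and conversion speed/power.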
Among innovative architectures, this tutorial focuses on the level-crossing ADC (LC-ADC). In this architecture, a sample is only acquired when the signal value reaches one of a set of given amplitude levels, leading to asynchronous sampling. The LC-ADC is often used with low-rate signals. Besides, if the signal varies only slightly, the LC-ADC does not usually reach all the levels, and the overall number of samples is thus reduced compared to that of conventional ADC architectures. Unfortunately, when sampling happens non-uniformly, the conventional processing of digital signals cannot be adopted directly. A reconstruction step is therefore needed in order to recover uniform samples.
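The level-crossing principle can be sketched in a few lines: no clock triggers the conversion; a (time, level) pair is emitted only when the input moves by at least one quantization step. The function name and the snap-to-level behaviour are illustrative assumptions, not a model of any particular LC-ADC circuit.

```python
def level_crossing_sample(times, values, delta):
    """Level-crossing sampler: emit a (time, level) pair each time the
    input crosses a quantization level spaced `delta` apart."""
    ref = values[0]
    samples = [(times[0], ref)]
    for t, v in zip(times[1:], values[1:]):
        while abs(v - ref) >= delta:          # may cross several levels at once
            ref += delta if v > ref else -delta
            samples.append((t, ref))
    return samples
```

A constant input produces a single sample regardless of duration, which is exactly the data-reduction property exploited for almost-inactive signals.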
In the second part of this tutorial, reconstruction algorithms that ensure a high signal-to-noise ratio (SNR) are presented. Generally, they are divided into three sets: matrix reconstruction, iterative reconstruction and interpolation reconstruction. After LC-ADC acquisition, however, only matrix and interpolation reconstructions are explored, depending on the output configuration. On the one hand, samples are delivered when a level is reached, and the LC-ADC data output remains zero between two successive samples with respect to the internal counter clock. The resulting digital output signal is thus sparse for signals that are almost inactive during some time intervals. In this case, matrix reconstruction is chosen since the obtained matrix is sparse and, besides, multiplication and inversion operations become easy. Greedy optimization algorithms, for instance orthogonal matching pursuit (OMP), are the lowest-complexity reconstruction algorithms that fit the requirement of high SNR. In fact, for a signal which is 95 % sparse, the SNR can reach 80 dB, while it is only equal to 5 dB for an 85 %-sparse signal. On the other hand, if the LC-ADC sends the sample values along with the time interval between two successive sampling instants, interpolation reconstruction is preferred. A polynomial reconstruction such as cubic spline interpolation is chosen, as it can reach a 70-dB SNR. Since the LC-ADC keeps only the most significant samples, it already includes a compression step. However, in some fields of application more compression is needed to further optimize the power consumption, the memory management, etc.
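The interpolation-reconstruction path can be sketched as follows: given the non-uniform (time, value) pairs an LC-ADC delivers, evaluate an interpolant on a uniform time grid to recover clocked samples. Linear interpolation is used here for brevity; the cubic-spline variant discussed above follows the same pattern with a higher-order interpolant. The function name is an illustrative assumption.

```python
import numpy as np

def reconstruct_uniform(t_nu, v_nu, fs, duration):
    """Recover uniform samples from non-uniform (time, value) pairs by
    interpolating onto a uniform grid at sampling rate fs."""
    t_u = np.arange(0.0, duration, 1.0 / fs)   # uniform time grid
    return t_u, np.interp(t_u, t_nu, v_nu)     # piecewise-linear interpolant
```

Once the samples are uniform again, conventional DSP (filtering, FFTs, etc.) can be applied directly.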
In the third part of this tutorial, compression algorithms, such as digital sparse decomposition and cubic Hermite interpolation, are presented. Digital sparse decomposition consists of selecting a basis matrix, or dictionary, in which the signal is sparse. The sparse representation is computed with the OMP algorithm, and thus the resulting vector contains few non-zero values. In the case of hard implementation constraints, the dictionary might be built from simple wavelets such as the Haar wavelets. In contrast, in biomedical applications such as electrocardiogram signal acquisition, the Daubechies wavelets are preferred. As for cubic Hermite interpolation, the algorithm converts the sample values and times generated by the LC-ADC into a compact 4-D cubic Hermite vector. It considers two successive sample values and their derivatives. By taking the derivative into account, the cubic Hermite algorithm allows reducing the number of samples that occur when a low-rate signal varies rapidly. For instance, in biomedical signals such as the electrocardiogram, the Hermite algorithm reduces the number of samples on the QRS complex.
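A minimal sketch of the OMP greedy selection mentioned above, assuming a generic dictionary matrix `D` with atoms as columns (a Haar or Daubechies wavelet dictionary would be plugged in the same way); the function name is illustrative.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select the dictionary atom most
    correlated with the residual, then re-fit the signal on the support."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected atoms, then update the residual
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef = np.zeros(D.shape[1])
    coef[support] = sol
    return coef
```

The output vector has at most `n_nonzero` non-zero entries, so only those few (index, value) pairs need to be stored or transmitted: that is the compression.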
Tutorial attendees will gain an overview of ADC architectures, particularly the LC-ADC architecture, as well as of some reconstruction and compression algorithms suited to the LC-ADC. In particular, attendees will be given the main keys to implement algorithms such as cubic spline interpolation or digital sparse decomposition.
Asma MAALEJ is an Assistant Professor in Telecommunications at the National Engineering School of Tunis. Since 2008, she has been a member of the GRESCOM research laboratory at the Higher School of Communications of Tunis (SUP’COM). Her main research activities focus on mixed-signal processing, circuits and systems.
Manel BEN-ROMDHANE is an Assistant Professor in Telecommunications at SUP’COM. Since 2005, she has been a member of GRESCOM research Laboratory. Her research activities are in the area of mixed signal, circuits and systems for radio communications and biomedical signal acquisition. She has authored more than 40 papers in international journals and proceedings of peer-reviewed international conferences.
Dominique DALLET is a Full Professor of digital electronic design at the engineering school Bordeaux INP, ENSEIRB-MATMECA. He is the Head of the Electronic Design group of the IMS Laboratory. Pr. Dominique Dallet conducts his research at the IMS Laboratory, University of Bordeaux, Bordeaux INP, CNRS UMR 5218. His research activities are mainly focused on data converter modelling and testing, digital signal processing implementation, and electronic design of analog and mixed-signal electronic circuits. He has authored over 200 papers in international and national journals or in proceedings of peer-reviewed international conferences, book chapters, and patents.
Bayesian Inference for Measurement Problems
Graz University of Technology
Extracting information from measurements is an essential part of measurement signal processing. In order to extract information about the inner state of a process, different signal processing approaches are known. Classical non-parametric methods, e.g. correlation or DFT-based methods, are well-established techniques, yet they are limited to problems with a simple relation between the data and the unknown quantity. Parametric signal processing methods, also known as model-based signal processing methods, directly incorporate these models into a solution strategy. These approaches are typically applied to parameter estimation problems, state estimation problems, or the even harder class of inverse problems. This tutorial discusses the treatment of model-based measurement problems within the Bayesian inferential framework. The Bayesian framework is marked by maintaining a probabilistic description for all quantities. The statistical modeling process has to address the formulation of the data/state model, the measurement model, the noise model, and the prior model. Questions like “How should we describe the quantity to be estimated?”, “How are the measurements distorted by noise?”, or “Is there any knowledge on the quantity to be estimated?” have to be answered during the modeling process. Bayes’ law provides a unified approach to combine all models by means of the likelihood function and the prior distribution, resulting in the posterior distribution. Inference refers to the exploration of the posterior distribution. While point estimators like the maximum a posteriori (MAP) estimator or the maximum likelihood (ML) estimator can be derived naturally from the unified Bayesian formulation, the Bayesian framework also allows for the computation of any expectation value for the solution.
Numerical sampling schemes based on Markov chain Monte Carlo (MCMC) techniques have to be employed to construct a Markov chain representing the posterior distribution. By means of Monte Carlo integration, any expectation over the posterior can be computed as a weighted sum over the Markov chain, enabling aspects like uncertainty quantification (UQ). This tutorial will give a comprehensive overview of the Bayesian approach for measurement problems and inference over the quantity of interest. The Bayesian modeling process will be discussed and state-of-the-art algorithms for MCMC will be presented.
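The MCMC idea can be sketched with the simplest such algorithm, random-walk Metropolis, on a toy measurement model: Gaussian observations of an unknown mean with a Gaussian prior. All names, the data model, and the tuning constants below are illustrative assumptions; the tutorial covers far more capable samplers.

```python
import math, random

def log_posterior(theta, data, sigma=1.0, tau=10.0):
    """Gaussian likelihood N(theta, sigma^2) with Gaussian prior N(0, tau^2)."""
    loglik = -sum((y - theta) ** 2 for y in data) / (2 * sigma ** 2)
    logprior = -theta ** 2 / (2 * tau ** 2)
    return loglik + logprior

def metropolis(data, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis: the chain's stationary distribution is the
    posterior, so chain averages approximate posterior expectations."""
    rng = random.Random(seed)
    theta, lp = 0.0, log_posterior(0.0, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = log_posterior(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain
```

Averaging the chain (after discarding a burn-in) is exactly the Monte Carlo integration mentioned above: the posterior mean, variance, or any other expectation is estimated from the same set of samples.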
Markus Neumayer was born in Kitzbühel, Austria in 1983. He studied electrical engineering at Graz University of Technology (TU Graz) and received the Dipl. Ing. degree and the Dr. techn. in 2008 and 2011, respectively. He is currently a senior scientist with the Institute of Electrical Measurement and Measurement Signal Processing at Graz University of Technology. During his PhD he was with the Department of Physics at the University of Otago, Dunedin/NZ, where he did research on Bayesian Methods for inverse problems under the supervision of Prof. Colin Fox, as well as Prof. Jari Kaipio (University of Auckland/NZ). His PhD thesis was awarded by the Austrian government in 2012 (Award of Excellence). In 2012 he also received a research award from the Styrian government for his contributions in the field of numerical simulations. His research interests include physical modeling of measurement problems/systems and sensors, numerical methods, inverse problems, Bayesian methods and statistical signal processing.
Signal Quality – From Wearables to Hospitals
Mohamed Abdelazez and Sreeraman Rajan
Heart rate monitors are becoming ubiquitous and are being used by both athletes and the general public to keep track of their health. Heart rate monitors are just one example of the wearables currently available to the public; other examples include oxygen saturation monitors, activity monitors, and muscle activity monitors. Wearables are typically not used in a controlled environment; therefore, the quality of the collected signals might be questionable. Even in a controlled environment such as a hospital, deterioration in the quality of the collected signals can lead to false alarms and a reduction in the quality of patient care. As the signals are used to inform users about their health, it is imperative that the signals are of acceptable quality. Signal Quality is the field of identifying and improving the quality of collected signals. Signal Quality can be divided into four categories: 1) detection; 2) identification; 3) quantification; and 4) mitigation. Detection is the acknowledgement of the presence of noise in the signal. Identification is the determination of the type of noise. Quantification is the estimation of the level of the noise. Mitigation is the reduction of the noise through noise removal techniques. This tutorial will provide a high-level overview of the different techniques in each of the Signal Quality categories.
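Two of the four categories can be sketched in a few lines: quantification, here via the standard deviation of first differences (which isolates fast noise from a slowly varying physiological signal), and mitigation, here via a sliding-window mean. Both function names and the particular estimators are illustrative assumptions, not the specific techniques the tutorial will survey.

```python
import random, statistics

def noise_sigma(x):
    """Quantification: estimate the noise level from first differences,
    assuming the underlying signal varies slowly between samples.
    For white noise, diff variance is twice the noise variance."""
    diffs = [b - a for a, b in zip(x, x[1:])]
    return statistics.pstdev(diffs) / 2 ** 0.5

def moving_average(x, w=5):
    """Mitigation: a sliding-window mean acting as a simple low-pass filter."""
    half = w // 2
    return [statistics.fmean(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]
```

Comparing the noise estimate against a threshold gives a basic detection rule as well, closing the loop from detection through mitigation.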
Mohamed Abdelazez is pursuing his PhD in Electrical and Computer Engineering and is a Vanier scholar at Carleton University. His Master’s thesis title was ‘Electrocardiogram Signal Quality Analysis to Reduce False Alarms in Myocardial Ischemia Monitoring’ and was on signal quality and its application in reducing false alarms. Mohamed’s Master’s thesis received a Senate Medal from Carleton University.
Sreeraman Rajan received his BE from Bharathiyar University, India, M.Sc from Tulane University, USA and PhD from the University of New Brunswick, Canada. He is currently a Tier 2 Canada Research Chair at the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada and Associate Director of the Ottawa-Carleton Institute for Biomedical Engineering. Prior to joining Carleton, he was a senior defence scientist at Defence Research and Development Canada and an Adjunct Professor in the School of Electrical Engineering and Computer Science, University of Ottawa, Canada and the Royal Military College, Kingston, Canada. He has worked in the nuclear science and engineering research, fiber-optic communication modules and systems, and non-invasive medical devices industries. He is a Senior Member of IEEE, chair of the IEEE Ottawa EMBS Chapter and a member of the Instrumentation and Measurement Society. He has served IEEE at the chapter, section, region and MGA levels. He is a recipient of the 2011 IEEE MGA Achievement Award, the 2012 Queen Elizabeth Diamond Jubilee Medal, the 2017 IEEE Wally Reed Outstanding Service Award and the 2018 IEEE Ottawa Section Outstanding Engineer Award. He has more than 130 journal and conference papers and has served as General Chair, Technical Program Chair, program/steering committee member, and in other organizational roles in several IEEE conferences.
Tutorial on Measurement Uncertainty
Measurement Standards Laboratory of New Zealand (MSL)
Overview: The tutorial is an introduction to the calculation of measurement uncertainty, primarily for those involved in making or managing measurements. The tutorial will be delivered in an interactive lecture style with examples and plenty of time for questions and discussion. There is an emphasis on the rationale for uncertainty analysis, and on the making of traceable measurements. Modest mathematics skills are required, and a USB drive with a PDF of the slides and notes pages will be provided to participants.
- Introduction: Why we need measurement uncertainty, a brief history of error analysis and uncertainty analysis, the origin of the GUM (BIPM Guide to the Expression of Uncertainty in Measurement), nomenclature
- Type A methods: the nature of measurement error, basic statistics, mean and standard deviation, degrees of freedom, confidence intervals, standard and expanded uncertainty
- Type B methods: theory, other people's work, subsidiary experiments. Making measurements traceable and auditable.
- Propagating and Combining Uncertainty: Measurement models, propagation of error, propagation of uncertainty, total uncertainty, effective degrees of freedom.
- Seeking help: Future developments, the BIPM task group on the GUM, GUM supplements.
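The Type A material outlined above can be made concrete with a short Python sketch; the readings, and the 95 % coverage factor for 4 degrees of freedom, are invented illustrative values, not course material:

```python
import math
import statistics

# Hypothetical repeated readings of a nominal 10 V source (values invented)
readings = [10.021, 10.018, 10.025, 10.019, 10.022]

n = len(readings)
mean = statistics.mean(readings)    # best estimate of the measurand
s = statistics.stdev(readings)      # sample standard deviation
u = s / math.sqrt(n)                # standard uncertainty of the mean
dof = n - 1                         # degrees of freedom

# Expanded uncertainty: k is the Student-t factor for 95 % coverage, 4 dof
k = 2.776
U = k * u
print(f"estimate = {mean:.4f} V, u = {u:.4f} V, U(95 %) = {U:.4f} V")
```

The same pattern (estimate, standard uncertainty, degrees of freedom, coverage factor) carries through the propagation and combination topics later in the tutorial.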
Background: The tutorial is drawn mainly from a one-day Measurement, Uncertainty, and Calibration training course that is run annually by staff from MSL and has been under continuous development for more than 30 years. This tutorial will be pitched slightly higher than the usual course, since attendees are expected to have knowledge of basic statistics and to be familiar with basic algebra and calculus. Historically, between 30 and 100 New Zealanders have attended the MSL course each year.
Dr Rod White is a distinguished scientist working in the Temperature and Light Section of MSL. For more than 30 years, he has conducted MSL’s annual training courses on Measurement, Uncertainty, Calibration, and Temperature Measurement, on which this tutorial is based. He is the co-author of the well-known text Traceable Temperatures, and author of more than 100 research papers. For more than a decade he was the chairman of the Uncertainty Working Group of the BIPM’s Consultative Committee on Thermometry, and he continues to be an active member of several of the CCT working groups, including as chair of the Task Group on Guides in Thermometry. Rod has worked at NIST (USA) and at NIM (China) as a guest researcher on several occasions, assisting with the now successful NIST/NIM collaboration to measure the Boltzmann constant by Johnson noise thermometry. He won the NZ Royal Society Science and Technology Medal in 1997 for contributions to temperature metrology, and the Cooper Medal in 1998 for the invention of the resistance bridge calibrator. In 2010, he was awarded a D.Sc. for his contributions to temperature metrology.
What is Impedance and Dielectric Spectroscopy?
Rosario A. Gerhardt
Georgia Institute of Technology
Impedance and Dielectric Spectroscopy (IS/DS) is a broadband characterization tool that can provide a wide range of information not often accessible any other way. Equipment capable of measurements spanning as many as 12 orders of magnitude is now readily available, and the technique has found niches in the characterization of battery materials and fuel cells. However, the method is not limited to these applications. The measured responses may relate to the bulk of the material or device, or to the presence of internal and/or external interfaces and their interaction with the surrounding environment, as well as to the frequency and voltage of the applied signal. However, because many materials and devices contain components whose resistivity can vary over 20 orders of magnitude (from the most insulating to the most conducting), interpretation can be very complicated. In fact, for the same material or device, the complex-plane graphs and frequency-explicit spectra can look quite different and lead to confusion unless the user has been trained in the proper analysis methods. Additional characteristics associated with the dielectric, optical or magnetic response of a given material or device can further complicate the analysis.
In this tutorial, the technique of impedance and dielectric spectroscopy will be described in the simplest possible terms. It will begin by describing the expected responses for the real and imaginary impedance of a wide range of material and device types. Then the conversion to three other formalisms, known as admittance, electric modulus and permittivity, will be used to demonstrate the detailed information that is often hidden within partially analyzed data. Examples will be provided that will help attendees not only understand the physical processes happening inside the material or device but also develop an understanding of how to control the outcome. Examples ranging from materials used in insulating layers in integrated circuits and packaging materials to highly conducting materials used in solar cells and batteries will be provided. It will be shown that it is possible to relate the spectra obtained to the presence of certain key responses: charge storage, electronic conduction, surface adsorption, switching phenomena and many others. Complementary techniques that are used to corroborate the physical assignments will also be included. The tutorial will end with examples demonstrating that this technique is exceptionally good for establishing quality control in a production environment and/or for assessing the service life of electronic and non-electronic components in a non-destructive way. Additional examples may be included depending on the interests of those attending the tutorial.
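Since the four formalisms are simple algebraic transforms of one another, the conversions can be sketched in a few lines; the ideal parallel-RC sample values R and C, and the empty-cell capacitance C0 below, are invented for illustration:

```python
import numpy as np

# Invented ideal parallel-RC sample and empty-cell (geometric) capacitance
R, C = 1.0e6, 1.0e-10     # ohm, farad
C0 = 1.0e-11              # farad

f = np.logspace(0, 6, 61)          # 1 Hz .. 1 MHz
w = 2 * np.pi * f                  # angular frequency

Z = R / (1 + 1j * w * R * C)       # complex impedance of the parallel RC
Y = 1 / Z                          # admittance
eps = Y / (1j * w * C0)            # complex relative permittivity
M = 1 / eps                        # electric modulus

# For this ideal sample the real permittivity is flat at C/C0, while the
# imaginary part falls as 1/(w*R*C0): the same data, seen in each formalism,
# emphasizes different features of the response.
```

Real data, of course, rarely follows a single ideal RC element, which is exactly why viewing the same measurement in several formalisms is so informative.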
Rosario A. Gerhardt is a full professor at the Georgia Institute of Technology, where she has been on the faculty since 1991. In addition to being a member of IEEE, she is also a member of MRS, ACerS, ASNT, AAAS and Sigma Xi. In 2017, she was awarded the ACerS Friedberg Lecture Award, which recognized her teaching, research and patent contributions. Her research is focused on developing an understanding of the relationships between the structure of materials and devices and their electromagnetic response. Her work has appeared in many different journals, including IEEE and physics journals. She is currently writing a textbook on the technique of impedance and dielectric spectroscopy, to be published by John Wiley & Sons. The textbook is based on her lecture notes for the graduate class that she has developed over the last 26 years and covers all types and forms of materials and devices. She was invited to compete for the title of Distinguished Lecturer at the 2018 I2MTC held in Houston. She is scheduled to present a tutorial lecture entitled “Impedance Spectroscopy: Basics, Challenges and Opportunities” at the upcoming Electronic Materials Applications Conference to be held in Orlando in January 2019.
Uncertainty-Aware Design of Measurement Systems Based on Drones
Francesco Picariello & Luca De Vito
University of Sannio, Laboratory L.E.S.I.M.
Unmanned Aerial Vehicles (UAVs) are becoming popular as carriers for sensors and measurement systems, owing to their low weight, small size, low cost and easy handling, which make them flexible and suitable for many measurement applications, mainly when the quantity to be measured is spread over a wide area or lies in environments hostile to humans.
The tutorial will introduce the architecture of the drone and show how the drone and its subsystems can be properly sized and characterized, according to the specification of the sensors and measurement systems that will be embedded on it.
It will be shown that the drone itself can interact with both the measurand and the sensors, thus influencing the measurement results; the subsystems and parameters that can influence the on-board sensors and measurement systems will be highlighted. For this reason, the drone equipped with its sensors must be regarded as a measurement platform and needs a metrological characterization as a whole.
In the second part of the tutorial, an overview of the types of sensors and measurement systems that can be embedded on a drone will be given, presenting their operating principles and applications. Several measurement applications will be described; for each, the measurement chain will be analyzed and the main uncertainty sources identified in order to quantify the measurement uncertainty.
Luca De Vito (M’10–SM’12) received the master’s (cum laude) degree in software engineering and the Ph.D. degree in information engineering from the University of Sannio, Benevento, Italy, in 2001 and 2005, respectively. His master’s thesis was on the automatic classification and characterization of digitally modulated signals. He joined the Laboratory of Signal Processing and Measurement Information, University of Sannio, where he was involved in research activities. In 2008, he joined the Department of Engineering, University of Sannio, as an Assistant Professor in electric and electronic measurement. He received the National Academic Qualification as an Associate Professor and as a Full Professor in 2013 and 2018, respectively. Since 2015, he has collaborated with the magnetic measurement (TE-MSC-MM), control and electricity (TE-CRG-CE) and hadron synchrotron collective effects (BE-ABP-HSC) sections of CERN, Switzerland. He has been a member of the IEEE since 2010 and a Senior Member since 2012; he is a member of the IEEE Instrumentation and Measurement Society, the IEEE Aerospace and Electronic Systems Society, and the IEEE Standards Association. He is a member of AFCEA and Young President of the AFCEA Naples Chapter.
He has published more than 120 papers in international journals and conference proceedings, mainly dealing with measurements for telecommunications, data converter testing and biomedical instrumentation.
Francesco Picariello received the B.Sc. ('09) and M.Sc. ('12) degrees in electronic engineering from the University of Salerno, Faculty of Engineering, and the Ph.D. degree in information engineering from the University of Sannio, Benevento, Italy, in 2016. He is currently with the Faculty of Engineering of the University of Sannio, Laboratory L.E.S.I.M., where he is an assistant researcher in the field of electrical and electronic measurements. His research interests include electrical and electronic circuit and system modeling, applied electronics, embedded measurement systems, microelectronics, power electronics, wireless sensor networks and road safety. He has published about 50 papers in international journals and in national and international conference proceedings on the following subjects: embedded systems, intelligent sensors, wireless sensor networks, distributed measurement systems, mobile devices, power consumption analysis for workstation computers, augmented reality, mechanical measurement at cryogenic temperatures and aerial photogrammetry. He has two patents pending at the national level in Italy, in the fields of augmented reality and mobile learning systems.
Distributed Photonic Sensing for Power and Energy Industries
University of Strathclyde
Optical sensors and photonic devices have technically matured to the point that they are increasingly considered as alternatives to their electronic counterparts in numerous applications across industry. In particular, the use of optical sensors has been considered for harsh, high-voltage or explosive environments where conventional transducers are difficult to deploy or where their operation is compromised by electromagnetic interference.
This prospective tutorial will explain the motivation for research on fiber-optic sensors, highlight the basic theories underlying their operation, and present selected examples of R&D projects carried out within the Advanced Sensors Team in the Institute for Energy and Environment at the University of Strathclyde and within Synaptec, targeting a range of industrial applications. The goal is to highlight the great potential of optical sensors and to enrich attendees’ experience in instrumentation and measurement using alternative, non-electronic methods.
Alternatively, for audiences with greater awareness of photonic sensors, the tutorial can be tailored to focus solely on the most recent progress in fiber sensing research for the power and energy industries carried out within the research and engineering teams. In this instance, the tutorial will highlight and detail specific examples of measurement needs within the power and energy sectors and report on novel approaches in fiber sensing to address them. In particular, it will cover applications such as distributed voltage and current measurement for power system metering, protection and control in the context of terrestrial and offshore energy transport infrastructure; structural health monitoring of wind turbine foundations; and measurement of the loss of loading within prestressing steel tendons used for reinforcing concrete pressure vessels and containment vessels in nuclear power stations. As promising solutions to these measurement needs, the tutorial will introduce emerging technologies such as hybrid fiber Bragg grating (FBG) voltage and current sensors, and novel methods of fiber-optic sensor packaging in metallic components for deployment in harsh industrial environments. The tutorial will present the most recent progress in these thematic areas.
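As a brief aside on the underlying principle: an FBG reflects a narrow band centred on the Bragg wavelength, lambda_B = 2 * n_eff * Lambda, and the measurand is encoded in shifts of that wavelength. The sketch below uses typical textbook values near 1550 nm; they are illustrative only, not parameters of the Strathclyde or Synaptec systems:

```python
# Bragg condition: lambda_B = 2 * n_eff * period
n_eff = 1.447                  # effective index of a silica fiber core (typical)
period = 535.6e-9              # grating period (m), chosen to land near 1550 nm
lambda_b = 2 * n_eff * period  # reflected Bragg wavelength (m)

# Typical textbook sensitivities near 1550 nm (illustrative values only)
k_strain = 1.2e-12             # wavelength shift per microstrain (~1.2 pm)
k_temp = 10.0e-12              # wavelength shift per kelvin (~10 pm)

# A measured 120 pm shift at constant temperature then implies the strain:
shift = 120e-12
strain_microstrain = shift / k_strain
```

The cross-sensitivity to temperature visible here is one reason practical FBG systems need careful packaging and compensation, topics the tutorial addresses.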
Dr Pawel Niewczas is a Reader in the Department of Electronic and Electrical Engineering at the University of Strathclyde. He is leading the Advanced Sensors Team within the Institute for Energy and Environment in the same department. His main research interests center on the advancement of photonic sensing methods and systems integration in applications that lie predominantly in power and energy sectors. He has carried out a unique portfolio of research programs, generally focusing on fiber based spectrally encoded sensors, and addressing such issues as sensor design, fabrication, packaging, deployment, and interrogation in challenging environments. The current key strategic applications of his research include such areas as power system metering, control and protection; wind turbine structural health monitoring; downhole pressure, temperature, voltage and current measurement; and sensing in nuclear fission and fusion environments. He has published over 100 technical papers in this area and holds 3 granted patents. Externally, he is a co-founder and R&D Director of the spin-out company Synaptec.
Electrical Capacitance Tomography: From Principle To Applications
The University of Manchester
As a non-intrusive visualization and measurement technique, industrial tomography has been under development for more than 20 years. Among the various industrial tomography modalities, electrical capacitance tomography (ECT) is the most mature and has several advantages: it is non-radioactive, both non-intrusive and non-invasive, fast in imaging speed, able to withstand high temperature and high pressure, and low in cost. ECT is based on measuring capacitances from a multi-electrode sensor and reconstructing the permittivity distribution of, e.g., a multiphase flow. The internal information obtained by ECT is valuable for understanding complicated phenomena, verifying CFD models, and measuring and controlling complicated industrial processes. ECT also has potential for healthcare and medical applications.
Because of the extremely small capacitances to be measured and the soft-field nature of the technique, ECT presents challenges in circuit design, in solving the inverse problem and in re-engineering. Our latest ECT system is based on an AC capacitance measuring circuit with high-frequency sine-wave excitation and phase-sensitive demodulation, and hence is called an AC-based ECT system. It can generate online images at a typical speed of 100 frames per second, with a signal-to-noise ratio (SNR) of 73 dB and a user-friendly GUI. An important feature of this system is that it is not affected by electrostatics, which may result from, e.g., a gas/solids flow. A recent development is real-time visualization of a multiphase flow in 3D.
Image reconstruction for ECT is challenging because the inverse problem is ill-posed and ill-conditioned. Many image reconstruction algorithms have been developed for ECT, but only a few of them are popular, such as linear back-projection (LBP) and Tikhonov regularization as single-step algorithms and Landweber iteration as an iterative algorithm. During this tutorial, Landweber iteration will be derived from a process-control concept.
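In its linearized form, the Landweber update is a simple feedback loop on the image estimate, equivalent to gradient descent on the capacitance residual. The toy problem below (random sensitivity matrix, invented sizes) is only a sketch of the idea, not an ECT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearized ECT model: c = S @ g, with S the sensitivity matrix,
# g the permittivity image and c the normalized capacitance vector.
# Real systems have e.g. 66 inter-electrode measurements and thousands
# of pixels; the sizes here are invented for illustration.
S = 0.1 * rng.standard_normal((6, 20))
g_true = rng.random(20)
c = S @ g_true                             # simulated measurements

# Landweber iteration: g <- g + alpha * S.T @ (c - S @ g)
alpha = 1.0 / np.linalg.norm(S, 2) ** 2    # step size ensuring convergence
g = np.zeros(20)                           # could also start from an LBP image
for _ in range(500):
    g += alpha * S.T @ (c - S @ g)         # feed the residual back

residual = np.linalg.norm(c - S @ g)       # shrinks towards zero
```

Viewed as a control loop, S.T plays the role of the feedback gain and the residual c - S @ g is the error signal, which is the process-control reading of the algorithm that the tutorial develops.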
The AC-based ECT system has been used in many applications. One of the challenges is re-engineering ECT sensors, in particular to deal with high temperatures and high pressures. In this tutorial, ECT will be discussed from principle through sensor design to industrial and medical applications, including
- Gas/oil/water flow measurement in the oil industry
- Wet gas separation in the gas industry
- Measurement of gas/solids flows in pneumatic conveyors and cyclone separators in the process industry
- Gas/solids fluidized bed measurement in the pharmaceutical industry (including the Wurster fluidized bed), and for clean use of coal (including circulating fluidized bed combustors and fluidized bed reactors for methanol-to-olefin conversion)
- Some medical applications, such as root canal treatment and total hip replacement.
Wuqiang Yang is a Fellow of the IEEE, a Fellow of the IET (formerly the IEE) and a Fellow of the Institute of Measurement and Control (InstMC). Since 1991, he has been working at the University of Manchester (formerly UMIST) in the UK, where he became a professor in the School of Electrical and Electronic Engineering in 2005. His main research interests include industrial tomography, especially electrical capacitance tomography (ECT), sensing and data acquisition systems, electronic circuit design, image reconstruction algorithms, instrumentation and multiphase flow measurement. He has published over 500 papers, is a referee for over 50 journals (including 6 IEEE journals), an Associate Editor of IEEE Trans. IM, an editorial board member of 6 other journals (including Meas. Sci. and Technol.), a guest editor of about 10 journal special issues and a visiting professor at 6 other universities. He has received many awards, including the 1997 IEE/NPL Wheatstone Measurement Prize, the 1997 Honeywell Prize from the InstMC, the 2000 IEE Ayrton Premium and the 2006 Global Research Award from the Royal Academy of Engineering, and he was a 2009 IET Innovation Award Finalist. He was a Co-Chair of I2MTC 2017 and is one of the key organizers and Honorary Chair of the IEEE International Conference on Imaging Systems and Techniques. From 2010 to 2016, he was an IEEE IMS Distinguished Lecturer. His biography has been included in Who’s Who in the World since 2002.
Metrology for Microwave Measurement
Measurement Standards Laboratory of New Zealand
Microwave technologies are continuously evolving. Researchers and engineers who need to characterize and understand the performance of innovative products must have a good grasp of the accuracy that measurements can provide. Our understanding of measurement uncertainty, as it applies to complex quantities, has advanced rapidly in the past ten to fifteen years. Much of this new knowledge has been incorporated in the latest revision of the European guidelines on Vector Network Analyser calibration (EURAMET cg-12 v.3, March 2018), which is destined to become the international de facto standard for best practice in vector network analysis.
The notions that underpin the new cg-12 guide are general metrological principles that apply across all sorts of microwave measurements. This tutorial looks at metrological traceability to the SI and the need to account for the contributions to uncertainty from various measurement errors. We explain the need for a measurement model in which sources of error are identified and their influence on a measurement quantified. We explain how to develop models and how to use them with the aid of software that takes care of laborious computational details. Models identify the strengths and weaknesses of a measurement procedure. This leads to a better understanding of why certain practices in the design of measurements lead to better outcomes. We also introduce an open-source tool for measurement modelling and uncertainty calculation written in Python.
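To make the idea of a measurement model concrete, here is a minimal first-order (GUM-style) propagation sketch. The model P = V**2 / R and all input values are invented for illustration; the open-source tool introduced in the tutorial automates this machinery, including for the complex-valued quantities that arise in VNA work:

```python
import math

# Invented measurement model: power dissipated, P = V**2 / R
V, u_V = 10.0, 0.05   # measured voltage and its standard uncertainty (V)
R, u_R = 50.0, 0.1    # measured resistance and its standard uncertainty (ohm)

P = V**2 / R          # measurement result (W)

# First-order propagation for independent inputs:
#   u(P)^2 = (dP/dV)^2 * u(V)^2 + (dP/dR)^2 * u(R)^2
dP_dV = 2 * V / R                       # sensitivity to the voltage input
dP_dR = -(V**2) / R**2                  # sensitivity to the resistance input
u_P = math.hypot(dP_dV * u_V, dP_dR * u_R)
```

The sensitivity coefficients make explicit which input dominates the combined uncertainty, which is exactly how a model exposes the strengths and weaknesses of a measurement procedure.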
Blair Hall is responsible for radio and microwave frequency measurement standards at the Measurement Standards Laboratory of New Zealand. He has written more than 100 reports and papers in the field of radio and microwave frequency metrology with a special focus on measurement uncertainty. He has also developed algorithms for uncertainty propagation that provide rigorous metrological traceability to the SI for measurements made with Vector Network Analysers (VNAs). Blair studied Physics at Victoria University of Wellington and at the Ecole Polytechnique Fédérale de Lausanne (EPFL, Switzerland). He has worked at METAS, the Swiss national metrology institute (Bern, Switzerland), and lectured at Massey University in Physics and Electronics.
Intelligent Edge Computing Technology for Internet-of-Things (IoT)
RF Test Solutions & ADLINK
IoT will connect billions of sensors and devices, and it is clear that the infrastructure of the Internet cannot handle the magnitude of raw data if every sensor individually sends all its data to a central processing and analysis location. There are other pitfalls with a single central processing solution, such as security, bottlenecks and access, as well as the latency between analysing data collected in one location and initiating a response at another. Edge computing is a concept that places distributed intelligent processing at remote locations where groups of sensors or controllable devices reside; data is processed before moving either to the cloud, to a central analytics platform, or peer to peer directly between intelligent edge devices.
This architecture requires robust, highly reliable, flexible measurement and acquisition systems with the ability to interface directly to the vast range of sensor types, actuators, transducers and industry-standard communications protocols. Information needs to be collected, processed, analysed and acted upon, with edge computers having sufficient processing capability for the application.
In this tutorial, ADLINK will discuss how their end-to-end Edge IoT solution works and bridges the IT/OT divide by allowing users to connect the unconnected, stream anywhere using peer-to-peer data movement technology, and control the edge. Case studies and examples will help attendees understand the concepts and benefits of this approach.