Cafe Scientific, Southampton, UK, past talks

Latest update of this file 09 July, 2017

Some details on past SWA science cafe talks in 2010, including transcripts of talks and Q&A
Some details on SWA science cafe talks of early 2011
Some details on SWA science cafe talks of later 2011
Some details on SWA science cafe talks of early 2012
Some details on SWA science cafe talks of mid 2012
Some details on SWA science cafe talks of end 2012, including transcripts of talks and Q&A
Some details on SWA science cafe talks of early 2013, including transcripts of talks and Q&A
Some details on SWA science cafe talks of mid 2013, including transcripts of talks and Q&A
Some details on SWA science cafe talks of late 2013, including transcripts of talks and Q&A
Some details on SWA science cafe talks of early 2014, including transcripts of talks and Q&A
Some details on SWA science cafe talks of mid 2014, including transcripts of talks and Q&A
Some details on SWA science cafe talks of end 2014, including transcripts of talks and Q&A
Some details on SWA science cafe talks of early 2015, including transcripts of talks and Q&A
Some details on SWA science cafe talks of mid 2015, including transcripts of talks and Q&A
Some details on SWA science cafe talks of end 2015, including transcripts of talks and Q&A
Some details on SWA science cafe talks of early 2016, including transcripts of talks and Q&A
Some details on SWA science cafe talks of mid 2016, including transcripts of talks and Q&A
Some details on SWA science cafe talks of end 2016, including transcripts of talks and Q&A
Some details on SWA science cafe talks of early 2017, including transcripts of talks and Q&A
Some summaries etc of past talks held at the venue, The Southwestern Arms (upstairs room), 36 Adelaide Rd, St Denys, SO17 2HW
Some hosts are not allowing remote linking now, so to view a "forbidden" picture you have to right-click with the mouse and select "view". Not verbatim, and there will be homonyms, transcription, transliteration, typing and spelling errors and misattributions in the following write-ups. Q&A grouped under one "Q" or "?" terminator tend to be dialogue with / multiple questions from one enquirer. ? for unheard / masked words, ??? for phrases.




Monday, 12 Dec, 2016, Dr Catherine Mercier supported by Dr Frank Ratcliff, both of Wessex Academic Health Science Network, Southampton: The 100,000 Genomes Project, focus on rare disease and cancer. 1 3/4 hr, 27 people. Wessex Academic Health Science Network is a facility for driving innovation forward into the NHS, and Dr Mercier is a clinical geneticist at Soton General Hospital. The talk will be about The 100,000 Genomes Project and the wider question of whether you'd have your genome sequenced. Dr Catherine Mercier: I believe we're in the midst of a revolution. If you think of the industrial revolution, it did not happen overnight, taking about 100 years for the changes to come to the fore. I think there are similarities with the genomic revolution. In 1953 the structure of DNA was discovered by Watson, Crick and Franklin. I believe a doctor in 2050 will still be looking at their patient's medical records, blood pressure, what medications they're on, but also at on-screen info on the person's genetics or even their genomics: the entirety of the DNA, the coding parts and the non-coding parts in between, and how they interact. So in my career I believe genomic medicine will become much more mainstream. We are made of trillions of cells and within each cell is the nucleus, which contains 23 pairs of chromosomes. Take a single chromosome and unwind the DNA, and along the string is gene after gene. Genes are important as they encode for proteins; essentially we are all made of different types of proteins. The gene is the smallest unit of heredity, about 20,000 in the human genome. An onion has 4 times that number. I'm a clinical geneticist; I see individuals and families who have a condition we believe to be due to an alteration in their DNA. Perhaps a mistake in a single gene. Perhaps a child with multiple congenital abnormalities, perhaps a heart abnormality. Someone with absent thumbs I saw recently. I have to try and find an underlying cause for why those abnormalities are found together. I also specialise in cardiac genetics. I look after families with hypertrophic cardiomyopathy, abnormal thickening of the heart walls where the heart muscle pump fails to work as well as normal, with a predisposition to abnormal heart rhythms. The sort of cases where an apparently fit and healthy footballer collapses on the pitch. Often these are inherited cardiac diseases. So a family member says: it happened to my brother, what is the chance of it happening to me? Will another child be affected in the same way as a first affected child? I think about DNA and chromosomes. A child might have a whole extra copy of a chromosome, like trisomy-21 or Down's syndrome. When I started in genetics, the best way of looking at somebody's DNA was basically to look down a microscope. If you look at a cell at an appropriate stage of division, you can see the chromosomes. You can see whether there is an extra one, or one missing, or even perhaps a chunk of one missing, giving the diagnosis. To look at greater resolution, gene by gene, I would have to look at the patient and think which of the 20,000 genes might have a mistake in it that is causing the problem. Thinking of them 1 by 1, I'd send them off for Sanger sequencing. The answer might take 3 months and be a yes, or if no then go back and rethink which was the next best candidate. A very time-consuming process, and all the time families are waiting for an underlying molecular diagnosis. Recently the tech that allows us to look at DNA has changed unrecognisably.
Instead of looking down a microscope at relatively little detail of what the chromosomes look like, or at 1 gene at a time, I can now ask for the entire genome, all 20,000 genes and the DNA between, to be sequenced, and done in about 48 hours. That is the same test that took the Human Genome Project 10 years to do. The first sequencing was an international collaboration, millions if not billions of pounds. We can now get that data overnight; that is why things are happening fast in the world of genomics, the tech has changed so much. The cost of sequencing a genome: 15 years ago, 100 million dollars; we now talk of the 1000-dollar genome. I used to put out my rod and fishing line and ask for alterations in one specific gene and get 1 result back. Now we sequence an entire genome, and we've all got alterations in our genome that make us human. Part of my job now is to sort out which of these genetic variants are disease-causing and which are just part of the normal human variation that makes each person unique. So in some ways my job is easier as we can sequence more, but also harder as there is much more interpretation. It does mean it's an exciting field to be part of. My work at the hospital is with patients who have had years of investigations, perhaps starting as a newborn. Some we've been seeing for 10 years and we still don't know what's causing their problems. We know it's likely to be genetic but not know exactly what the gene change is that's to blame. There is a support group called SWAN, Syndromes Without A Name. They are parents of children of whom doctors can only say: I'm sorry, I don't know what this is, I don't know the name of it, I don't know what the recurrence risk is. That is very isolating for a child with disabilities, for whom no explanation can be given. With the technology change, and families out there who badly need a genetic diagnosis, in 2012 there was the launch of the 100,000 Genomes Project in England. A government-funded project through the NHS; it's not research and it's not, as such, mainstream medicine. It's what's called a transformational project. We are working on having genome sequencing incorporated in the mainstream of the NHS, hopefully as a legacy on completing this project. We're sequencing 100,000 genomes; it's not quite 100,000 patients. The project is split into a rare disease arm, the sort of patients I see, but also a cancer arm. We bid at University Hospitals Southampton in a competitive bidding process and were chosen as 1 of 15 hospitals to host a genome medicine centre, in 2015. As well as hospitals being involved, there are also industry partners, as it's realised that it won't be the NHS that goes on to drug development, for example. This will run parallel with all the extra data we are creating. The NHS is not resourced for developing new medicines, so there are partners in private industry also. It's not just the UHS area we are recruiting patients from, but around Wessex: Portsmouth, Basingstoke, Winchester and perhaps Bournemouth. The most important thing for me as a doctor is that many of my patients I know to have an underlying genetic condition, but we have not been able to find the diagnosis. I'm hoping that for them, if enrolled in this project, they will get an answer. It is important that the process is transparent and involves a clear consent process. We spend about 40 minutes with a patient at the outset explaining what it means to have your genome sequenced, with chances for them to ask questions.
We are about the first healthcare system in the world going about this. I was talking to some colleagues at a recent German conference and they said they could never do this, as their healthcare system is not joined-up enough; they could not get the right people to talk to one another. But with the NHS, the data would be stored centrally and hopefully the benefits will be huge. We hope to find some new genes along the way. They are always there; it's just we don't know what they do. Every week the scientific literature gives the name of a gene and what it does. Is it a cause of intellectual disability or some unusual familial condition? Hopefully with all this data will come a lot of medical insights. It's possible we will start to stratify patients according to their genome. About 15% of hospital admissions have an adverse drug reaction involved at some point. If we could find out what it is about a person that causes a bad reaction to a drug, and not give them that drug, that would save significant morbidity and also save money. There is a particular HIV drug that 5% are super-sensitive to, and if you have that sensitivity, that genomic signature, that medication is not used, so this is already happening. The project is also being used to stimulate the UK genomics industry. We are hoping for patient equity across the country: every patient with a rare disease, or a particular type of cancer, has access into this project. Half the project is recruiting people with rare diseases. A rare disease is something that affects fewer than 1 in 2000 people. You may not think that is a huge healthcare burden, but there are 1000s of rare diseases, so many that 1 in 17 of us will have a rare disease of some kind. So at least 2 people in this room. Look at rare diseases as a whole group and they are pretty common, so an important healthcare burden. 80% of rare diseases have a genetic cause, and genetic diseases are still the largest cause of death in the first year of life. The other half of the project is enrolling patients with various malignancies. Cancer, essentially, is due to DNA errors. We're born with our germline DNA; there are certain cell-lines that will continue to divide through life, such as skin or gut or lung cells. The instructions telling the cells how to divide are in our DNA. But if you accumulate mistakes in your DNA, from too much sunlight or cigarette smoke or poor diet, then the instruction manual is damaged, and poorly regulated cell multiplication can result, and a tumour. If we can learn the genomic signature of those dividing cells, we can much more exquisitely target treatment. Again this is already beginning to happen. In non-small-cell lung cancer we routinely look for mutations in the EGFR? gene and stratify treatments accordingly. At UHS we're including patients with breast, prostate, colon and lung cancer. We take DNA from their germline and compare it to the DNA in the tumour, with the mistakes in it. That's why the 100,000 GP is not quite for 100,000 people, because people in the cancer arm of the project will have 2 genomes. If we have a suitable patient: see the patient in clinic, go through the consent process in detail, take DNA from the patient. If they appear to be the only person in the family affected and it seems to be a recessive disease, we take blood from the patient and their parents. Basically it's a complex spot-the-difference puzzle. With dominantly inherited conditions we want to get as many samples from affected family members as possible.
Then spot the genetic change that tracks through a family with that disease. For the cancer arm, a blood sample and a tumour sample for DNA. We also need lots of patient medical records or data, because interpreting those genomic variants is impossible without knowing what kind of job is done by the gene that has the mistake in it. So we have to marry up patient details and DNA samples. That involves inputting data about people's medical history. The DNA is all sequenced in a super-factory near Cambridge, the Sanger centre, and the results are fed back to our lab in Salisbury. Then the doctors involved with recruiting patients will be involved in partial interpretation of results and feeding them back to the families. We are hoping that the diagnosis rate will be about 25% for rare disease. That is quite a big uplift, as many of these patients have had numerous investigations before. The main piece of info back to a patient will answer the diagnosis question, such as: why does my child have intellectual disability? We also give patients the option of additional findings being fed back to them, such as gene changes we know to be associated with other, separate diseases but for which management is available, say a high-risk bowel cancer gene or breast cancer gene. We feed back such info only if that is what the patient would like, and only if it's an actionable condition. So not incurable neurodegenerative diseases with no known treatment. If parents are considering further children, they can opt for carrier status of certain conditions to be fed back, such as CF or some X-linked conditions. This is determined in the consent part of the enrolment. Dr Frank Ratcliff: The project is about building a UK genomics industry, building on research so we can link medical records to genetics and outcomes. It's also about bringing improvements to patients. 2 videos of people who are part of the project. A family with a newborn child who are immediately told that there is nothing we can do to help: serious medical issues and there is no help. But their attitude was that if you just have one day of this life, then plant a seed for others. So they joined the project to help research and the body of knowledge, even knowing that it would not help them at all. A survivor of aortic dissection, who has a potential benefit from joining the project: if the gene behind it can be found, then they can quickly and easily ask whether his sons carry the gene, and if they do, then there is preventative action that can be taken. So a potential benefit within the family. A third story involving epilepsy: going round many hospitals, seeing many consultants, and multiple tests, often daunting, MRI, lumbar puncture. At the end of the day undiagnosed, and having doses of antiepileptic medicine to try and control the epilepsy for Jessica, and that was not working. The only option going forward was to increase the dose, powerful medicines which have significant side effects. The family had the chance to join the 100,000GP. The questions were: can I get a diagnosis, can I get a treatment, and for the parents, if they had another child, would that child have a normal risk of epilepsy or the same risk? A small amount of blood was taken from the child and the parents, and whole genome sequencing done on all 3 people. That produces an awful lot of data. We have about 3 billion letters in our genome. The 20,000 genes are about 2% of that, swimming around in there without any punctuation or paragraph marks.
If we picked two normal healthy caucasian males there would be about 3 million differences between them. In Jessica's case there were about 6.4 million differences, and the question was what she has that neither parent has, as both were healthy. Out of the 6.4 million differences, 700,000 were known to be rare, about 3000 would affect a protein, 67 were not shared with her parents, and 1 was in a gene previously associated with Jessica's symptoms. It was a gene that encodes a protein which moves glucose from the blood into the brain, a glucose transporter. If you can't move glucose into the brain, the brain does not get its normal energy source, and so the symptoms of epilepsy were symptoms of hypoglycaemia, as a diabetic would have. Only 500 cases of this gene change are known globally, so almost any clinician is unlikely to see 1 such case in their working life. Certainly not 2, so without a large database, no clinician has any learning to go from. There is out there in the literature some evidence of a treatment. We can make our own glucose from fat, if we don't have a source of sugar. So she was switched to a low-carbohydrate, high-fat ketogenic diet, which provides an alternative energy source for her brain, and she is now a lot better. She has some symptoms from a brain starved of glucose for her first few years. Similar to the Atkins diet but it goes a step further. Her genetic changes are not shared with her parents. Perhaps they were healthy because it was a recessive gene and both passed on that gene, giving a 1 in 4 chance of a child having the same outcome. But that was not the case; it was a spontaneously arising mutation, so now the parents are confident that if they have another child there is little chance of a similar epileptic-symptom child. Not always such good news, but it shows what can be done. Do you think it's an opportunity to take control of your health, or do you play ostrich and hide your head in the sand? The clipboard passing round asks: would you like to have your genome sequenced or not? No one has put down "No" in this outing of the survey. A printed volume for chromosome 21, one copy, the smallest chromosome we have. Printing is double-sided, narrow margin, 4-point font. Q: how do you know it's right? The first time we showed one of these books, at an exhibition, someone turned up, looked at it and said there was a mistake, pointing to the precise place. .......... It's printed upside down. The printers had bound a page upside-down. That is called a DNA translocation, and that can also cause symptoms. The genes that encode for protein in this printing are in uppercase. If the whole thing was printed, in the same print size, it would be 130 volumes. This chromosome is the standard reference one, normal, for academic research purposes. Q&A: So why do most people think having their genome sequenced is a good idea, and why do a couple of people not? For: if you have huge numbers of datapoints, it would be interesting to look at gene-type clusters. One group of people who appear to be perfectly normal and another group also normal, but not quite the same. Could that show some kind of evolution? So you would donate your genome for research purposes, to improve the knowledge about what is normal. I think a number of people join the project for that reason. If it costs 1000 dollars to get it done, then why not get it for free, assuming you're allowed to download the raw data? You have to pay extra for the raw data but you can get it. No one has asked for it yet, so I don't know what the charge would be.
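The funnel described above, 6.4 million variants whittled down to a single candidate, is essentially a sequence of filters over trio genotypes. A minimal sketch of that kind of triage, assuming made-up variant records with a population frequency, a protein-effect flag and parental genotypes; the field names, thresholds and gene list are illustrative, not the project's actual pipeline:

# Toy trio-filtering funnel in the spirit of the talk's numbers.
# All field names and cutoffs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    pop_freq: float         # frequency in reference populations
    protein_altering: bool  # does it change the encoded protein?
    in_mother: bool
    in_father: bool

def candidate_de_novo(variants, phenotype_genes):
    """Filter child variants: rare -> protein-altering -> de novo -> known gene."""
    rare = [v for v in variants if v.pop_freq < 0.001]
    coding = [v for v in rare if v.protein_altering]
    de_novo = [v for v in coding if not (v.in_mother or v.in_father)]
    return [v for v in de_novo if v.gene in phenotype_genes]

# Usage: a hypothetical list of genes already linked to the child's
# symptoms, e.g. a glucose-transporter gene as in Jessica's case.
hits = candidate_de_novo([Variant("SLC2A1", 0.0, True, False, False)],
                         {"SLC2A1"})
print(hits)  # -> the single remaining candidate

Each stage is cheap; the expensive part, as the speakers say, is the clinical knowledge encoded in the phenotype gene list and the medical history behind it.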
The difficulty with that 3 billion bases on a disk is having the bioinformatics pipeline to interpret it. Aren't there sites out there to interpret it in some, maybe limited, way? So if you have your genome on a disk, you could upload it somewhere to ask: where are the differences between my DNA and everyone else's? The challenge then is that between you and me there will be about 3 million differences. You'll find 3 million changes, but how do you find out what matters? That's where you need clinical skill and a bucketload of medical history and case notes. So perhaps a wikipedia-type structure listing all the relevant changes? But what is relevant, without medical history? There are SNPs ("snips"), Single Nucleotide Polymorphisms, locations in the genome where we know there are differences between various populations. Some SNPs are associated with propensities for certain disease types, so 23andme is like that. Send a swab off to them and get a SNP report. We don't use them in clinical practice at all. So it's difficult when someone says to us: can you interpret the data from 23andme, when it's not a test we use. It can be interesting, but there's not enough knowledge around it to have proven clinical utility for the NHS. On the no-thank-you side: I could go into a long spiel about the difference between the agenda of the patient and that of the doctor, which are very different. Essentially, the reason is, if I'm well, I've no interest in getting any investigations done, or even having anything to do with any doctors? Personally I'm also not unwell and would take that view as well, and say I'm not sure that I want to know that I will get cancer in 20 years' time. Because I'd be eating well, not smoking, exercising regularly anyway, so it wouldn't change what I do. There's also the fact that as you get older, you will die from something, so don't worry about your health and waste time going to doctors; spend more time getting on with your life? There is a balance; some conditions are so much more treatable than they were. I said no, partially to get a response, because I'd quite like to know what could be useful to me. I'm not convinced at the moment that they are predictive tests, coming from genes, which would mean I could look out for the bowel cancer in 10 years' time or become aware of an issue that I would suffer from? Many patients come forward because they are in a very different situation; they know they have something to be found, and are even desperate to find out. For the people we talk to who are not affected by rare disease, the concern is more paramount because the question is lesser. We probably do something like a risk/benefit analysis, though not putting it in those terms. If there is no benefit to you or your family then there is just anxiety. Will they discover a gene for anxiety? Will I get Parkinson's when I'm 50; do I want to know whether I'll get Parkinson's at 50? But if one of my kids was ill, then all those such concerns would go out of the window. So many illnesses are a combination of genetics and interaction with the environment; how far down that route? There are a lot of conditions where we have a genetic risk and then it's to do with the environment whether we actually express that condition. The project is not looking at the environment; we are looking for really strong genetic factors or absolutely causative ones, but there's a lot of work to be done on that. Would that be epigenetics?
We're not looking at epigenetics. Genes change throughout a lifetime, things turned on or off, so epigenetics? Not just epigenetics. I only made this arm once; all the genes needed to make arms were active only once. Teeth I need twice and hair I need all the time. So different genes are switched on and off at different times, and in response to illness a whole different suite of genes is switched on, and that is probably a key part of the lower-case text in this book. Between genes are the switches; some are on/off switches, some are dimmer switches. But most of our genes are not used most of the time. I was also wondering about the spontaneous changes throughout a lifetime; maybe you had a gene sequence done as a child, would you want one when you are 60? At the level we are looking at the moment, we wouldn't find differences like that, apart from sequencing a cancer genome, which definitely would be different. I read something about a study of identical twins, presumably originally identical, but they had changed due to different lifestyles and environments? We accumulate mutations. If one twin smoked, then they would accumulate mutations in their lungs much faster than the non-smoking twin. We would not sequence people at birth and then later, but we may sequence a tumour and tissue from the same patient: spot the difference. Are you suggesting you can resequence parts of our body? The sequence of cells in my left hand should be the same as in my right hand, but if I had lung cancer in one lung then the sequence of that cancer would not be the same as the sequence of the other lung, because cancer is a genetic change that causes undifferentiated cell division. But in terms of medicine, is it possible to get in there and resequence the lungs? When you say resequence - when we say sequencing we mean identifying what the sequence is; we can't go back and change it, we can't edit it. We can't set it back to zero. That sort of technology does not exist? It probably is coming, but not currently. With cancer there are multiple genetic changes in a tumour. It's not a question of a single mutation and then you get cancer. Cancers are dividing rapidly and accumulate additional mutations all along the way. It's a complex catch-up trying to keep on top of a cancer's mutation load. If it was a single mutation then that would be easier, but it's constantly evolving. So another interpretation of the term mutation is gene change? Exactly. I think I've heard that racial differences are in fact just due to very small differences in the DNA. So if we compare an Eskimo with an Aborigine, are there big differences; does it make interpretation difficult? The differences are genetic, why Eskimos look like Eskimos. Take a rare disease family as an example. Go back to the 100,000 genomes being sequenced, about 50,000 in the rare disease arm. That is something like 17,000 patients plus 2 close family members each. So the comparison we are doing is between Eskimo child and Eskimo parent, and then playing spot the difference, rather than comparing an Eskimo with a Glaswegian. The proportion of your DNA that reflects your appearance is a tiny proportion. I expect there is more to racial difference than simply appearance, but you're saying that even so, the differences are relatively small? Yes. And for the patients, the comparators are close relatives, so they'd be sharing almost all the genes, and then we're asking which genes are similar between patients with the same diseases that are not present in patients without the disease.
With cancer patients you took cells from the tumour and DNA from something else, bilateral or ??? You want germline DNA, the DNA you're born with, and that is in every tissue of your body; it's just that blood is the easiest one to get hold of. We use the DNA in white blood cells and compare it. Is there work to take non-invasive tissue samples? For the types of investigations of the patients we are seeing, in the scheme of things a blood test is relatively non-invasive. There is some work to see if you can use saliva, but the DNA from that is not such good quality. We sometimes try that if we have a needle-phobic patient. Or for children it is difficult to take blood from, we sometimes have a stored DNA sample. For the kind of tests I do, a sample is stored pretty much for many years. How much blood? We take 4 tubes with a few mL in each, so about a tablespoon in all. Is this process limited to 1-to-1 mapping, 1 gene to 1 condition, or is there perhaps a mathematical limit on how many multiple genes apply to one particular condition? Primarily we are looking at conditions that are monogenic, one gene for the disorder. We are also learning that more and more conditions are perhaps polygenic, involving 2 or more genes. In the work I do with cardiac genetics we're seeing that quite a lot. So 1 significant risk-factor gene, then another gene variant, and added together they may reach a threshold effect whereby you get the condition in question. Is it possible to have say 10 genes affecting one condition, and you wouldn't actually pick that up? Absolutely; look at height, perhaps controlled by 100 genes, and childhood nutrition as well to complicate things. But mapping back from the height to which genes are responsible is too complicated within the scope of this project. We know some genes: say classical achondroplasia or pescle? dwarfism, a single letter change in a single gene, and you go from an adult of average height to someone with achondroplasia. But other conditions like coronary heart disease are due to factors in multiple genes that are additive and work together. Different conditions work in different ways. Many common diseases are due to multiple variants in many genes. A slide covering people's responses from other such talks as this. Would you want to know your risk of disease, or would you rather carry on enjoying life? Would you want to be reassured, on the flip side, if the answer was that you're to be healthy? How would you share the info with family, if you needed to? (I've got 2 kids but also 3 siblings; if I was sequenced and found out that I'm likely to get Parkinson's when I'm 50, what do I tell my brother and sister, because there's a 50:50 chance they share it as well.) Insurance implications. Would it change your self-perception, your behaviour, your lifestyle, and should it, as we all should be living healthy lives anyway? Have insurance companies started taking an interest in this? There is a moratorium at the moment and there has been for a long time. At the moment they all comply with the UK moratorium, in that they will not ask if you have had a genetic test and they certainly will not ask for the results. But they can ask the simpler question: do you or your parents have any of the following conditions? So they can get genetic info without asking about any sequencing having been done. That is only a voluntary moratorium, it's not statutory law, so they could decide not to follow it and call the government's bluff. Is that true around the world? We don't know the answer to that. How might primary care change?
There will be some diagnoses that emerge that will have a small impact on primary care, as most patients don't have rare diseases. In time, with work on pharmacogenetics, then in primary care: this particular drug is normally prescribed at 20mg, but this patient, post-sequencing, would require less of the drug. In the longer term there would be more tailored drug policy. It's not just sequencing us but sequencing the disease. A few weeks back I was holding a small DNA sequencer produced by an Oxford company; it fits happily in my hand, linked via USB to a laptop. It uses a tissue-fluid sample; it's not running whole human genome sequencing. They took that out to west Africa last year, sequencing patients' samples to assay whether they contained Ebola. In the situation there, someone would walk in and say: I might have Ebola. Up to then, that person would sit in a tent, and if you're still standing up in 21 days, you didn't have it. If you did have it, then you're dead. With the sequencing, you can sequence for the infectious agent and in 2 hours you can say: positive, or you're free to go. Treatment situations go from the likes of Ebola to: have you a viral infection, so take some paracetamol, or have you a bacterial infection, then we'll give you antibiotics, and by the way we know which antibiotics will work. What about long-term conditions and things that are more multifactorial, where you have a percentage risk of something? There are some subtypes of diseases where the management will change. Diabetes we used to think was type 1, early onset, and type 2, later onset associated with increased weight. We are now learning there are certain subtypes of type 1 for which the treatment is different. We had a boy of 16 diagnosed with type 1, given insulin 4 times a day as his treatment. His blood sugar control was terrible, a huge impact on his lifestyle. There was a family history of diabetes, and eventually they had genetic testing and found out he has MODY, maturity onset diabetes of the young. And the right treatment for him was not insulin but sulphonylureas, an oral medication. So he came off the 4-times-a-day injections, and his blood sugar control is much better. I think we'll be sub-stratifying some common diseases like this, learning more about tailored treatments, and that would filter down to primary care. Is the project part of interfering with nature? With the genome technology, is there a chance of it interfering with nature's natural processes? ??? dilution? Eligible adults can choose to find out, as a result of this project, whether they are carrying a gene that does not give them symptoms, because it's recessive, and if they are carrying the gene, they could also have a potential life-partner carrying that gene too. They could then make reproductive choices, knowing of the 1 in 4 chance of a child having that monogenic trait. So yes, it is possible to change natural processes. We have to accept that medicine is interfering anyway? There was a recent article about people having Caesareans interfering with nature, as there are now a lot more people with narrower birth canals than there were before? So selecting against. With small handheld devices in the future, do you see a point where DIY home testing will become cheap and accessible enough for people to hack around sequencing? The Oxford Nanopore?, I don't know the cost, but wouldn't it always be cheaper to just send your sample in? You could go around sequencing all sorts of stuff, ants, beetles? The Star Trek Tricorder.
There's bound to be people who'd like to mess around hacking this stuff, these biomes, a new hobby? It only tells you what's there; it doesn't enable you to change anything. Garage biology. Prior to the genome technology, I never found out whether in the medic community or among researchers themselves there was ever a system equivalent to the Google search engine, where with exact medical terminology for the clinical expression of some condition, otherwise an unknown rare disease, you put it into some database search engine and come out with possible diagnoses? You go to a medical clinician, fully versed in all the correct terms, bung in the clinical features and out comes, perhaps ranked, possible conditions? There are a couple of databases that are free to use. One is called OMIM, Online Mendelian Inheritance in Man. It's not as refined as you were suggesting, but you can put features into that and it will give a list of a number of genes. In clinical genetics we use databases such as the London Database of Genetic Conditions, and we can do just that: enter perhaps 5 features and get it to tell me all the syndromes that have those things linked to them, as a drop-down list. They are under licence and expensive to use. That would contain all the medical literature, going back to the 1970s or 60s? There is Pubmed, available to the public. You can put the relevant feature in the search box, and it gives a list of publications that have those keywords in. Extending on from that, is there a halfway house for just ordinary people, a sort of reverse dictionary? Those facilities are great if you have the exact medical terms and specifics of clinical details to search on. Is there a halfway house where an ordinary person can put in vague ordinary English terms and get out the specialised medical terms for them? I'm aware of some specialised terms like subluxation, supination and pronation which Joe Public wouldn't know, but you could eventually zero in on those exact terms and then progress to OMIM and Pubmed? Part of my experience as a clinical geneticist is learning which of those terms, when confronted in a patient, are likely to mean an underlying diagnosis. I'd talk to them and ask about features that I know to be related to that kind of disease. So a lot of experience and also the tools. There is a system trying to unify the descriptive terms that are used, HPO, the Human Phenotype Ontology, trying to standardise terms across databases and websites, so the communication between doctors is clearer. The money allowing this project to progress, where does it come from, and is there a chance of selling the data derived? So the economics? The project is funded by NHS England, in the region of 550 million. A lot of money, but people with rare diseases have a long-term condition, and especially if you can impact them young, it's not difficult to imagine the cost savings paying for the project eventually. 11 pharma companies have already paid for access to the data, a quarter of a million pounds each. Significant sums, but not against 550 million. They've paid to see an anonymised version of the data and to run some analyses. They can't copy the data; they have to run their analyses on NHS England servers. So the model is that it's a reading library, not a lending library. Then if they discover anything, they still don't own it. They will still have to buy anything that they then discover off NHS England.
At a price that would reflect the value of that discovery. So not one price fits all. The aim of the project is not to make money; the aim is patient benefit. The NHS is not in the drug discovery game. I was just concerned the NHS could discover something very valuable, and who would share those profits? The knowledge we accumulate will be exportable. We're leaders in genomic education and we're being asked, by others around the world, to teach and share what we've learnt. There may be a revenue stream there, but not the really big sums that drug companies make. So what happens to the data, who can see it? The patient-identifiable data, with a name on it, only comes back to the clinical geneticists who are looking after that particular patient. Anonymised data is visible to the 11 pharma companies, but also visible to groups of academic researchers registered to use that data, which is medical history and genetics but anonymised. So it's not just drug companies that can make important discoveries. Does that data refer to one particular individual or a collation of loads of people? The anonymised data is all of it, so you can start doing comparisons between whole groups of people. If there is a disease that only occurs 500 times globally, you need to look at all those datapoints to find the commonality. How do the researchers approach this? The research groups are all based around a clinical disease, called GeCIPs, Genomics England Clinical Interpretation Partnerships. They will say: we're researching the genetics of asthma; they'll register as a consortium or collaboration for access to do that, in order to research asthma. They can't then go off and research diabetes, which someone else is registered for. Importantly, when patients join the project, part of the consent process, the 40 minutes, is to give consent for academic and commercial research. If people change their mind, they can withdraw later, even after being sampled and sequenced. After their own personal result, they can then decide to withdraw, and everything would be withdrawn. ??? to make their results public and be exploited. I wondered if a similar thing with the NHS. It's in the public interest to make the info public ???. ? This data is not publicly available. It's held on NHS England servers. You can't copy it out. If you want to make a discovery, you have to write programs that will work on those servers. Send your program to them, they'll run it and send you the results. But they know what your results were, they know what your program was, what your research interest was. The same whether you are an academic or commercial researcher. What they charge later may be different. Could an academic group ??? could follow ??? They could be doing exactly the same thing. It could be a straight race. That means patients would get the discovery quicker. This seems to be a British database. How are we getting along globally, as presumably, other than Germany, there are people doing this? There are big databases of normal genomes, like the ExAC database. We often use that when we find a variant in someone's genome; we look there and see if it's found in the normal population. Are the databases being produced in a compatible, workable way? Ideally all this data would be shared on a central server, and that's not happening. In this country there have been silos of data about different conditions. In the cardiac world that I know about, different research groups had their own silos of data.
This project is about sharing the data, because it's so much more powerful if it's shared. So you're saying we ought to be working towards it but we're not doing very well? I'd say it ought to be universal; I think we're good at it in England and this project will improve that more. The project is initially in England rather than the UK. This project is a global leader. There are other projects, in the US, Canada and France, which will do similar work, but they're not so far advanced as this project. Hopefully they will use databases that talk to this database. What happened to the Icelandic database? This was deCODE Genetics, a company formed about 10 years ago. The government noticed they had a highly homogeneous population, a lot of in-breeding. So they had very good medical records, and births, deaths and marriages going back 800 years in writing. The government formed that company to analyse that data, to find inherited causes of disease, and very controversially they set it up as an opt-out system. So they said to all Icelanders: we're going to commercialise your medical data unless you tell us not to. Instead of it being an opt-in system, which effectively this project is: come and join if you want to. They had a high number of opt-outs and I'm not sure that it got them very far. If anything, perhaps a model of how not to do it. In the sequencing, you always get 2 letters per chromosome pair. Is there a technology that would sequence each individual strand, and would that be helpful to identify diseases? You always know what the other one is, because they always pair. If you're reading an A then on the other side there will always be a T. Same with C and G. And the reciprocals. So you only sequence 1 strand because you can always infer the other. But each locus could be twizzled round either way, so you don't know an individual strand; you don't know which letter belongs to which strand? You know what order they appear in and the orientation, where a gene begins and ends, because there are certain sequences in the DNA that always occur at the beginning. So if I got one letter change on 1 strand and another letter change elsewhere, I would not know if they were on the same strand or different strands? You also don't know which strand is the coding strand, because the gene could be on one side or the other. When you get a DNA sequence, are you sequencing just 1 strand? Yes, but you can always infer the other, from the complementary nature. You have a pair of chromosomes - sorry, I should have said chromosome rather than strand - one of the pair? So rephrasing: at each position there are 2 letters, one on each chromosome of the pair; can you figure out which letter belongs to which chromosome, and would that be useful for picking out diseases? Yes, you probably can, because we have about 3 million differences between us, so within any particular family we are sequencing, you will be able to identify which chunks of chromosome have come from the mother and which from the father. The DNA of each parent is mixed, not totally randomly, but in random blocks. And does that make a difference to disease progression, 2 mutations on one chromosome as compared to 2 mutations on different chromosomes? Some are dominant, so yes. Think of a simple model: DNA encodes a protein that does something. We are either made of protein or of stuff that isn't protein but was made by an enzyme, which is a protein. So if you make a small change in the complete chromosome, the most likely outcome is that you've broken it. It's very difficult to make a change that mends it.
But I've 2 copies of everything, one from mum, one from dad, so for a lot of diseases, as long as I still have 1 working copy, I still have an enzyme that does something and you don't notice. So the only issue arises when I get a broken copy from mum and a broken copy from dad. That's recessive mutations: if I pick up both of them, a 1 in 4 chance, I then have no working copies at all and then I get the disease. I was thinking perhaps you needed 2 changes to break the copying process? They can get beastly complicated. So go home and talk to other people about this, as it will change medicine, change how disease is treated, and that'll only work if the public accept it. W56/1/40/10
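A small illustration of the strand inference discussed in the Q&A above: because A always pairs with T and C with G, reading one strand fixes the other. A minimal sketch using the standard Watson-Crick complement rules; the example sequence is made up:

# Infer the complementary DNA strand from a sequenced one.
# A<->T and C<->G pairing means one read determines both strands.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired partner strand, read in the same direction."""
    return "".join(PAIR[base] for base in strand)

def reverse_complement(strand: str) -> str:
    """Partner strand written 5'->3', i.e. reversed, as sequences usually are."""
    return complement(strand)[::-1]

seq = "ATGGCCATTC"               # made-up fragment
print(complement(seq))           # TACCGGTAAG
print(reverse_complement(seq))   # GAATGGCCAT

Note that, as the speakers say, this only recovers the partner strand; it does not tell you which of the two chromosome copies (maternal or paternal) a given variant sits on - that phasing needs family data or more elaborate sequencing.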

Monday 09 Jan 2017, Prof Gavin Foster, NOC Southampton: How hot will it get? Climate change insights from our past. 41 people, 1.75 hr. In the run-up to the Trump election, Trump was asked in a radio interview: do you believe the Earth's temp is increasing, and what would you do with respect to climate change? He said he was not a believer in global warming and not a believer in man-made climate change. This view is quite pervasive in USA politics. I think it's founded on a whole bunch of misconceptions about how the climate system works and what we know, and what we don't know, about that. In Dec 2015 Ted Cruz said the current computer models used to understand global warming trends are profoundly wrong and inconsistent between evidence and the data. I want to show here that he is the one who is profoundly wrong. When we look at the climate models, they do quite a good job of matching the warming that we've seen over the last 100 years. A graph of the anomaly relative to the 1880s-1900 baseline gives what the climate models produce, given the forcings on climate that we've reconstructed. A pretty good match. For the last year, we're bang in the middle of the prediction spread. That's not to say there aren't some legitimate reasons to question the climate models. They are not perfect representations of the climate system; they run on massive supercomputers. They break the Earth into little grid squares and try solving the equations of state for each of the cells. Because they break the Earth down into squares a couple of hundred km across, they're omitting some processes, like how clouds and rain form. It's particularly these parameterisations that make the models less than perfect. There is a lot of tuning that needs to be done - they don't like that word - where you tweak certain parameters to fit observations. Part of the historical record is used to tune the model so it fits the rest of the record. But they do a reasonable job of simulating the climate. The multi-model mean from the last IPCC report, the average of 36 or 46 climate models: the result looks like the temp distribution over the planet in reality. Also included is the difference between the observations and the models. It looks about right, but in detail there is up to 3 degrees' difference, and between models there are about 3 deg C differences. In one way Ted Cruz is right: we do have to question these models, especially when we use them to predict our future. A plot of 1850 to 2300 from the last IPCC report shows the observed temp range and different scenarios of how warm the Earth might be; the bounds reflect the uncertainty in those predictions. It is legitimate to ask how reliable these projections for our future are, given we know the models aren't perfect. But predicting future climate is not just about climate models; climate science is old, 150 years. I want to tell you about what we call equilibrium sensitivity, a measure of how sensitive the Earth is to change. I'm a geologist, and I want to use the geological past to show how we can test this understanding. The central tenet of geological theory is that the present is the key to the past, the uniformitarian principle, the leading light for geological research. That means we can study processes in the present and it tells us how rocks were deposited in the past. I want to turn that on its head: is the past the key to our warm future? What can we learn from looking at the climate of the past? The main driver for the climate system of Earth is the Sun.
All the energy that drives the climate system: we get about 340W per sq m at the Earth's surface. Of that, about 100W is reflected back, because the Earth has clouds and ice-sheets. In the 1820s Joseph Fourier used black-body radiation theory, still a valid theory about how things respond when radiation is shone at them. He calculated that with 340W coming in and 100W bouncing back, the effective temp of the Earth should be -17 deg C. What he knew then, and we know now, is that the average Earth temp is higher than that, at 16 deg C. The difference of about 30 deg C he put down to the blanketing effect of our atmosphere. That was in the 1820s; we call it the greenhouse effect, he did not call it that. In the case of the Moon, with no atmosphere, the effective temp of the Moon is about -0.5 deg C, due to having a different albedo to Earth, grey rather than blue, so reflecting a different amount of incoming radiation. On the face facing the Sun about +120 deg C and on the other face about -150 deg C. That's what Earth would be like without the atmosphere's blanketing effect. The difference between the 340W and the 100W is absorbed by the Earth. This is short-wave radiation coming in, radiated back as heat, ie long-wave radiation. That heat is then trapped by greenhouse gases in the atmos; some gets emitted out the top, some gets emitted back down and heats the Earth's surface, and it cycles on. The Earth is in radiative balance, so the incoming equals the outgoing. We are not relying on climate models to show the power of CO2 in changing the climate; these are relatively straightforward physical observations, known for a long time. A plot of the wavelength of light coming in from the Sun: most of that solar radiation is in the visible part of the spectrum - that's why we see in the visible part of the spectrum - short, 0.5 micron wavelength, then a long tail-off. Also included is what we measure at the Earth's surface. It's not the same as what is coming in, because the atmos is absorbing some of that short-wave radiation. We can do the same thing looking down at the Earth's surface for the long-wave radiation coming out: the spectrum coming out at the surface, and then at the top of the atmos also. A lot of stuff is missing, trapped by the atmos. That radiation is heat, and in effect this is the greenhouse effect (GE): the difference between the 2 curves is the GE. It can be measured with relatively easy technology, no climate model needed. The different gases absorb different particular wavelengths of radiation: methane absorbing some, nitrous oxide different wavelengths, CO2, and water vapour with lots of absorption bands, a powerful greenhouse gas. Sum them together and that is the amount of radiation being absorbed by the atmos. This was recognised by John Tyndall, who used to teach at Stockbridge School before leaving for Germany and becoming a great physicist. In 1859 he determined the GE was predominantly down to water vapour and what he called coal gas, which we call CO2. This understanding of how the GE and the atmos worked was really honed in the 1960s, when the military wanted to shoot down aeroplanes with heat-seeking missiles. You need to understand how heat is absorbed in the atmos in order to target planes with missiles. How the various gases in the atmos change the heat absorption was tied down by the military. The same physics that underpins climate science underpins the building of missiles. So which of those gases is the most important? In terms of driving the GE, water vapour and clouds drive about 70% of the GE, CO2 and the other gases being about 25%.
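Fourier's number can be reproduced with the Stefan-Boltzmann law: balance the absorbed sunlight against black-body emission and solve for temperature. A minimal sketch using the talk's round figures (340W per sq m in, 100W reflected); the exact answer depends on those inputs:

# Effective (no-greenhouse) temperature of the Earth from radiative balance:
# absorbed solar flux = sigma * T^4, solved for T (Stefan-Boltzmann law).
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

incoming = 340.0         # average solar flux at Earth, W m^-2 (talk's figure)
reflected = 100.0        # reflected by clouds/ice (albedo), W m^-2
absorbed = incoming - reflected

T_eff = (absorbed / SIGMA) ** 0.25
print(f"Effective temp: {T_eff:.0f} K = {T_eff - 273.15:.0f} deg C")
# -> about 255 K, roughly -18 deg C; the talk quotes about -17 deg C.
# The ~30 deg gap to the observed surface average is the greenhouse effect.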
Anyone boiling a kettle or having a shower on a cold day knows the amount of water vapour in the atmos depends on the temp. The higher the temp, the more vapour in the atmos. Water vapour responds to temp change, it doesn't drive temp change. You can't drive changes in the strength of the GE by changing water vapour, because water vapour only changes if you change the temp. Change the temp and the vapour will make that change bigger, but you can't drive changes in the GE with it. You can only drive changes in the GE by changing CO2 and the other non-condensing greenhouse gases. Those gases stay as a gas regardless of the temp of the Earth, or at least over normal human-friendly conditions. So we're burning lots of fossil fuel, burning lots of trees, making lots of stuff out of cement. The consequence of that is that CO2 has rocketed. Since we've been measuring atmos CO2 it has gone from 320ppm in the 1960s to 404ppm last year. More long-wave radiation radiating from the Earth's surface being trapped by the atmos must mean heating of the atmos, as more and more long-wave radiation is trapped. A data visualisation from 2016, by Ed Hawkins of Reading, called the climate temperature spiral, shows how the temp has changed over time. In the 1850s/1860s, early industrial times, 0 degrees; then in the 1950s/60s it starts to kick off. In early 2016 we were touching 1.5 deg C. That was mainly due to the El Nino. Regardless of the inter-annual variability, the temp has been increasing as that CO2 has been increasing. One way scientists talk about the sensitivity of the Earth to CO2 changes in the GE is Equilibrium Climate Sensitivity. It is a useful metric for how the Earth works and how climate models work in comparison to the Earth. ECS is the mean surface temp change for a doubling of atmos CO2. You have to wait for the system to play out, to reach the new steady state. By that point, all the changes that are going to happen have happened: it's reached equilibrium. If we look at the radiative forcing, the change in the radiative budget of the Earth, a doubling of CO2 is only about 4W per sq m, a small amount, but it will have a dramatic effect on the temp. So take a ball of rock floating through space with a thin atmos with a bit of CO2 in it. If we double atmos CO2, we have that radiative forcing and we have a temp response of about 1.1 deg C. That's known as the Planck response, based on the black-body radiation theory that Fourier used, and that Stefan and Boltzmann used in 1878 to calculate Earth's climate sensitivity. It misses out a lot of processes that happen on the Earth, which has plants, an atmos with water vapour, oceans. If we double the CO2 on the real Earth we don't really know what the response is. One of the main uncertainties in climate science is that we don't know how sensitive the Earth is to CO2 change. We have some good ideas but don't know exactly. Partly because it is a very complicated system and we are essentially a water planet. If you increase CO2, temp goes up, then you evaporate the oceans more, water vapour in the atmos goes up, causing a stronger GE, which causes the temp to go up. That continues to a certain degree, in a positive feedback loop, or a vicious circle. There are a bunch of these, not just water vapour: sea-ice, land-ice, clouds, peat-bogs, soil carbon etc, all different but positive feedbacks. There are negative feedbacks as well, but the net effect of changing the balance is a positive, amplifying effect. This was recognised by Svante Arrhenius: in 1896 he published a paper, On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground.
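The ~1.1 deg C Planck response quoted above can be recovered from the same black-body law: a small forcing divided by the rate at which emission rises with temperature. A rough sketch using the talk's figures (~4W per sq m forcing for doubled CO2, ~255 K effective temperature); the logarithmic forcing formula is the standard simplified approximation, not something derived in the talk:

import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0     # effective emission temperature, K (from the balance above)

# Standard simplified forcing for a CO2 change: dF = 5.35 * ln(C/C0) W m^-2
forcing = 5.35 * math.log(2)           # doubling CO2 -> ~3.7 W m^-2

# Planck feedback: how fast black-body emission grows with T, d(sigma*T^4)/dT
planck = 4 * SIGMA * T_EFF**3          # ~3.8 W m^-2 K^-1

print(f"Forcing {forcing:.1f} W/m2, Planck response {forcing/planck:.1f} deg C")
# -> about 1 deg C of warming before any feedbacks (water vapour, ice, clouds)
# amplify it; the full ECS is the feedback-amplified version of this number.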
Arrhenius was a chemist, getting a Nobel Prize in 1903 for his electrolytic theory of dissociation. He was not a climate scientist, doing that as a hobby. In Victorian times the big question was: why in Scotland, Norway, places like the Alps, do we see evidence of there having been a recent cold period? They knew that quite recently the Earth's climate had changed to the warm climate we have now, back from something colder. We now call that the Last Glacial Maximum, about 20,000 ya; ice stretched down to Bristol. In the UK we see lots of U-shaped valleys, including striations where glaciers scraped the rock clean, leaving scour marks. On the likes of Salisbury Plain you get erratics, even Medbury on the south coast. Deposits left by ice etc, and in the late 1800s everyone wanted to understand the mechanism behind this. He looked at what changing the carbonic acid content of the atmos would do to Earth's climate, and whether this could explain why we have cold and warm periods. In that paper he recognised that the actions of humanity would increase the CO2 content of the atmos and warm the Earth. He was from Sweden, which is cold, and he thought that was a good thing. He even wrote to the government of Sweden to say this would be a good thing to do, so we could grow grapes in Sweden. That ice-ages could be brought about by decreasing the CO2 content to 60% of the present value is pretty close to what we think now. He recognised that if we double atmos CO2 we'd warm the Earth's surface by 5 deg C. That was in 1896, over 120 years ago, and it's pretty much close to what we think now. He used a slide rule and a year of dedicated maths; we now use super-computers, which interestingly take about the same time. The calculations are now done on increasingly small grid squares on computers like NASA's. We find that despite the increase in computational power we are no closer to narrowing down the uncertainty in this parameter, ECS. A plot against time of estimates of ECS: Stefan and Boltzmann in 1878, Arrhenius in 1896, a gap, and then in 1979 the first climate models came about and the Jule Charney report, with a mean of about 3 +/-1.5, encompassing the range of Boltzmann and Arrhenius. Then with the sets of IPCC reports we've not really refined that range. The last IPCC report puts it at about 1.5 to 4.5 deg C per CO2 doubling. The vague band in the plot is the uncertainty in the climate models. Each model has a different ECS; it's an emergent property of the model. It's not a number you choose, it emerges from your complex model. Some models have a low sensitivity and track low, some a high sensitivity and track high. The uncertainty in our future depends on how much CO2 we burn, the choices we make, and also the sensitivity of the climate system. We could have a very sensitive climate system, and then even if we do a lot to mitigate climate change we still end up with a lot of climate change. Alternatively, with a low sensitivity we can carry on burning a lot; it will have dramatic effects, but it won't be as bad as if the system were super-sensitive. My research is about using the geological record to try and predict and understand our warm future, and in particular to better understand climate sensitivity. How has climate over the last 140 my evolved? Over geological time the climate has changed dramatically, through natural causes. Video: the break-up of Pangaea, the supercontinent. Early Cretaceous with dinosaurs, then moving forward to a world more like our own. America breaking from Africa, India moving up to collide with Asia forming the Himalayas, the Atlantic opening up.
All these continental movements change atmos CO2 and global temp. Moving through the last 30my, the temps are shown in yellows and oranges; about 18/19mya the temps in the oceans are about 34 deg C, whereas now the warmest ocean temp is 28 deg. For Antarctica about 34mya, and the northern hemisphere about 10mya, you start to see the first appearance of ice. The Earth occupied those different climate states, so if we can understand what caused those different climate states, we can better understand how the Earth responds to changing CO2 and changing climate. The Earth cycles through these ice-age/greenhouse states quite regularly, about every 500 million years. A supercontinent forms, breaks up, reforms etc, and has done for about the last 3.5 billion years, due to having a convective mantle that is driving plate tectonics. During a supercontinent break-up phase a lot of CO2 is coming out of the solid earth at rifts, into the atmos. Lots of volcanoes and so a high-CO2, warm, greenhouse climate. When the continents are coming together, forming mountains, there are fewer volcanoes. Mountain building is a sink of CO2; the clay minerals that are formed in mountain rivers remove CO2 from the atmos. On top of this grand half-billion-year cycle there is cycling of the climate on shorter time scales, due to how the Earth orbits around the Sun. This was recognised by Milutin Milankovic, trying to understand why we had cold climates relatively recently. In 1920 he proposed that glaciations were driven by orbital changes of the Earth. The Earth is influenced by the other planets: every 41,000 years the tilt of the Earth changes from 24 degrees to 22 degrees, there is a change in the elliptical nature of the orbit, and there is the way the Earth spins, the precession, like a spinning top. These have little effect on the total amount of sunlight reaching the Earth, but the distribution of that sunlight through the year, and where the maximum insolation is, changes as the orbit changes. Examples of a cold orbit and a warm orbit: a cold orbit is when there is a small tilt, and the northern hemisphere is colder; in a warm orbit the northern hemisphere is tilted towards the Sun in the summer. These orbits affect the local temp in the northern hemisphere. In a cold-orbit phase you get some ice growth one summer, that ice stays, that increases the albedo because it is reflective, more sunlight is reflected, and then on a global scale, via a bunch of feedback effects, it also causes atmos CO2 to come down. That all leads to more cooling, more ice growth, more CO2 stored in the ocean, more cooling and so on. When we have a warm orbit, the ice retreats, decreasing the albedo, less CO2 is stored in the oceans, more warming... These orbital cycles are quite a small influence on the Earth's radiative budget, but through the bunch of feedbacks they can cause dramatic climate change. A map of what the Earth looked like 21,000 years ago: the UK ice sheet stretching down to Bristol, all N America covered in the Laurentide ice sheet. So much ice was locked up in the northern hemisphere that sea levels were 130m lower. The temp in Antarctica over the last 350,000 years: cold, warm, cold; cold climates quickly go to warm. Those cycles are driven by the orbital parameters. A plot of atmos CO2 goes with that climate change: warm climate = high CO2, 280ppm; cold climate about 200ppm. Then 0 to 60my of Earth history: the oxygen isotope composition of some bugs that live on the sea floor. This shows how climate evolves over time, cold and warm temps. 50mya we were very warm, 12 to 14 degrees warmer, lots of warm ocean.
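Back on the orbital cycles for a moment: a toy illustration (mine, not the speaker's) of why they give such an irregular-looking beat. The 41,000-year tilt cycle from the talk is superposed with the roughly 23,000-year precession and 100,000-year eccentricity periods usually quoted alongside it. Amplitudes here are arbitrary; real insolation calculations are far subtler.

    import math

    # Milankovitch periods in years: tilt (from the talk), precession and
    # eccentricity (standard values, my addition). Amplitudes are arbitrary.
    CYCLES = [(41_000, 1.0), (23_000, 0.5), (100_000, 0.3)]

    def toy_forcing(t_years):
        return sum(a * math.sin(2 * math.pi * t_years / p) for p, a in CYCLES)

    for t in range(0, 400_001, 50_000):     # sample the last 400,000 years
        print(t, round(toy_forcing(t), 2))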
Through time, things have cooled down. That is the transition from a greenhouse to an icehouse. Antarctica in the warm state had not much ice; about 34mya we had rapid growth of the Antarctic ice. Another time interval, the Pliocene, 3mya: looking at the range of climates from cold to warm, including a bit warmer than today, and then the super-warm climate of the Eocene. The climate predicted for 2100 looks a bit like 3 million ya, not as warm as the Eocene 50mya. So looking over 50my we are sort of bracketing a possible future. It's not that easy to reconstruct the climate of the past. Today we can go out and measure CO2 content, even with satellites; we measure the land and sea temps. How to go back in time and measure the same parameters? We have to use fossils: foraminifera, single-celled protists that live in the surface part of the ocean. About 0.3mm across, they make their shells from calcium carbonate, like chalk and a lot of rocks. A lot of rock is made from the dead shells of these organisms. We measure the chemical composition of the shell and reconstruct various climate parameters. A pic of the UK with a blue patch, a bloom of plankton seen from space: thousands of billions of individuals forming enough of a colour change of the ocean to be seen from space. When they die they sink thru the water column, marine snow, and accumulate in vast quantities on the ocean floor at about 1cm per 1000 years, an ooze of this dead calcium carbonate. We take a research vessel with a big drill rig on it and put a core into the deep sea. It can operate in many 1000s of metres of water and drill a core many 100s of metres long, drilling back through the time of sediment layers. We can work out the age of when it was deposited, recover the shells, and take them to the lab to do analysis on them. We can work out their chemical composition, and things like the magnesium content of the forams tell us the temp of the water in which they grew: there is a correlation between the Mg content of the forams and the temp. Then we can work out ocean temp. Most of my time is spent working out atmos CO2 content from the past, something that is really only done at Soton; not many other labs in the world can do it. We want to know the temp of the Earth that had double the CO2. There is the ice-core record from Antarctica. As ice accumulates there, it traps the ancient atmos in the ice. We can take a core, get the gas bubbles in it and work out atmos CO2. One of the ice cores covers 0 to 800,000 years, a wiggling trace of glacial-to-interglacial cycles. But ice cores only go back 800,000 years; we want to go back 50my, to look at those really warm climates, as those are more like our future. My lab is used for boron isotopes: take the foraminifera, measure the boron isotopic composition, and that tells us the pH of the ocean of the past, the acidity of the ocean, and hence CO2. Some boron isotope data, compared with CO2 data from ice cores: not perfect, but it does a pretty good job as an indirect measure of CO2. The main advantage is we can go back 50my, back to super-warm climates. A record for 50mya to 30mya, when the Earth was 9 to 14 deg C warmer than today. For the interval around 3mya, CO2 goes from about 400ppm to 300ppm and climate goes from 3 deg C warmer to temps similar to today. The CO2 in 2016 was 404ppm, last seen on the Earth 3mya, and the 2100 CO2 would be around about 1000ppm, last seen about 45mya according to our data. When we combine that temp info with the CO2 we can calculate ECS. We want to test what the climate models suggest.
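Back on the proxy methods for a moment: the Mg-temperature correlation mentioned above is usually written as an exponential calibration, Mg/Ca = B * exp(A * T). A sketch with constants of the order used in published planktic calibrations; these are illustrative values, not the Soton lab's own numbers.

    from math import log

    A, B = 0.09, 0.38    # illustrative calibration constants (per deg C, mmol/mol)

    def temp_from_mg_ca(mg_ca):
        """Invert Mg/Ca = B * exp(A * T) to get growth temperature, deg C."""
        return log(mg_ca / B) / A

    print(round(temp_from_mg_ca(3.3), 1))   # ~24 deg C for a warm surface dweller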
For the 50mya to 30mya period there is a bit of uncertainty in our estimates, but the maximum probabilities are around what the climate models suggest. 3mya, again in the same climate-model range. And for the ice-core record of 800,000 years we're in the range of the IPCC models. That means the temp changes we see in the geological record are behaving the same way; the Earth is behaving as the climate models would suggest. The models are predicting a certain warm future; we apply that understanding to the past. The temps we observe are entirely consistent with the sensitivity and the CO2 change we reconstruct. When Ted Cruz says that the models are profoundly wrong, that is clearly not true. And according to our assessment of the geological past, the climate evolution is likely to follow the mean of one of these lines, depending on the choice of our emissions. We will probably move along the centre of one of these bands, maybe on the upper end. There is no doubt the Earth is warming due to the increased magnitude of the GE, caused by CO2. It is an old science; we've known about it for over 120 years. The predictions from those early scientists are continually being borne out by new studies. This hundred years of understanding is encapsulated in the climate models, and those models are doing a pretty good job of predicting our future. The geological past is a good independent test of how those climate models perform. A few quotes: Sherwood Rowland, Nobel prize for his work on ozone depletion, and George Santayana - those who do not remember the past are condemned to repeat it. At a recent RS meeting one of my colleagues shouted down a climate-denier questioner: "If you don't believe in the GE, try sleeping on the Moon." Q&A You tell us the increase of CO2 we push into the atmos increases the GE. Presumably the atmos is a kind of insulation layer, the re-radiated heat off the Earth stays with us, therefore we get hotter, is that right? Yes. Why therefore, if we put more CO2 up there, increasing what I'd call insulation, is that not suppressing the radiation coming from the Sun? Because the Sun's radiation is short-wave radiation, it goes straight through. The wavelength there is .25 micron to 2.5 micron, and that is all at one end. Looking at the absorption bands of CO2, they are in the 2 to 4 micron region; water vapour is 1 to 10 microns. Could you return to your slide where the models are along the bottom and the amount of CO2 along the top? Can you put some kind of scale on the upper one? The top ones represented different human behaviours; can you give us some idea how those translate into the real world, eg which one of those would be the Kyoto Agreement, and how likely you think they are? "Business as Usual", that is burning all the conventional fossil fuels, not just doing nothing, but the economies grow and other countries industrialise. Then by 2200 we'd have pretty well burnt all the available conventional fuels; I think that is 6,000 petagrams of C. It could go higher, perhaps 11,000 Pg. We've actually come off business as usual in the last couple of years. The rise in CO2 emissions has stopped growing, whereas for all of industrial time the amount of emitted CO2 had increased. Due to the switch to renewables, about 10 to 20% of UK energy, and the biggest thing is China not burning as much coal. The Paris protocol is perhaps hitting about 3 degrees by 2100. For successive IPCC reports, from the first to the most recent, we were always on business as usual, and now we're not, so we can be pleased with that.
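Going back to the short-wave/long-wave question above: Wien's displacement law, peak emission wavelength inversely proportional to temperature, makes the asymmetry quick to check. A sketch with standard constants (mine, not from the talk):

    WIEN_B = 2.898e-3    # Wien's displacement constant, m*K

    for name, temp_k in (("Sun", 5778.0), ("Earth", 288.0)):
        peak_micron = WIEN_B / temp_k * 1e6
        print(name, round(peak_micron, 1))   # ~0.5 micron vs ~10 micron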
Built into the Paris agreement is usage of technology that does not presently exist. By mid-century there would have to be net removal of CO2 from the atmos; we can't do that at the moment. We've moved off the red plot. When I started doing this stuff we were always on the red one. With economic cycles, proportionally how much effect does that have? The 2008 crash did dent the rise in CO2, but it had recovered in a year or 2. It was noticeable in the plots? Yes. In terms of growth of CO2, WW2 was also evident. Those sorts of big changes are evident but not stopping the overall effects. When you were talking of feedback loops they seemed to be mainly positive feedback loops, which do have a habit of running away with themselves. So something must be stabilising that to counter it; what are the negative feedback loops that give a stabilising point? The main climate stabiliser is silicate weathering, the turning of rocks into soil. That creates clay minerals, and a clay mineral won't contain all the ions that were contained in the original rock. A lot of those ions move to the ocean, where they stimulate biological activity and get locked up in those shells, basically. Those shells then arrive at the deep sea and then they go into the mantle, and then they come out of volcanoes. The rate that rocks turn to soil is temp dependent. So that's the natural way in which climate regulates: if it gets warmer you get more soils formed, more weathering, more oceanic ion deposition, more CO2 drawn out of the atmos and put in the mantle. When CO2 is low and temps are low, the opposite happens. The natural rate of CC, or CO2 change at least, is about 20ppm per million years and we are doing 100ppm in a century. We are doing 1 or 2ppm per year, which is something like 100,000 times faster than nature. The natural negative feedback can't keep up with that. I believe in Iceland they were trying C capture by pumping CO2 into ? and it was solidifying. Is anything like that going to have ?? That's the sort of tech that doesn't currently exist in a commercially viable sense. That's exactly what we need to decouple economic growth from CO2: put the CO2 in the ground, carry on burning it, but capture it and put it into rock. That was Matter from Ocean and Earth Sciences of Soton who led that study. They put CO2 into basalt in Iceland and within 2 years they'd removed 90% of what they put down there. I'm not sure whether it is economically viable yet. The economically viable version of that is C capture and utilisation, making sodium carbonate for making glass, so stable as to lock up CO2 for 1000s of years? Yes, in India. There is a whole bunch of these different technologies that could perhaps save the day. When it's acidic, animals die? Ocean acidification is known as the second CO2 problem. The processes are better understood; the impacts on organisms are less understood. CO2 is an acidic gas, it dissolves in the oceans. The oceans have acidified by about 0.1 pH units, which doesn't sound much, but it's a log scale, so about a 30% increase in the concentration of H ions in the ocean. It is affecting things like coral reefs and shellfish shells in various parts of the coastal ecosystem. They are more threatened by the temp, I think, especially corals. It's the temp rise that's killing them, not the acidification. What are you most worried about? The ice sheets, for me, are the biggest threat. They respond so sluggishly, and the big continental ice sheets of Greenland and Antarctica have about 65m of sea level locked up in them.
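Two of the figures in that answer are easy to check with a couple of lines (my own arithmetic, using the talk's numbers):

    # 1) A 0.1 pH drop is ~26% more H+ ions, since pH is a log10 scale.
    print(round((10 ** 0.1 - 1) * 100))      # ~26 (%)

    # 2) Natural vs current CO2 change rates.
    natural = 20 / 1_000_000                 # ~20 ppm per million years
    current = 2.0                            # ~1-2 ppm per year
    print(round(current / natural))          # ~100,000 times faster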
The ice sheets don't react quickly; we know from the geological record they have not responded in the past quicker than about 2m per hundred years. It's like a freight train: once you start them melting, even if we have some miracle technology that brings the situation back to a pre-industrial state, they are responding to the climate of 100 years before. Those sorts of long-term commitments will happen whether we like it or not. ??? coal, ??? not absorbing, turning it into electricity rather than heating up? Probably too small an effect. The heat released in making a fossil-fuel CO2 molecule is something like 100,000 times less than the energy that molecule goes on to trap in the atmos. I think a blackish roof compared to a red one will not make much difference. On your last slide you said there was no doubt that the temp is going up and humans are causing it. Presumably some of those who disagree call themselves scientists, and presumably some of those are sincere in what they believe and their doubts. So how would you summarise the position of someone like that? People used to believe the Earth was flat and they were sincere in that. Before my interest in climate science, my PhD was on geochronology, how old rocks are. There was a similar viewpoint that the Earth was only 6,000 years old, but geochronology said it was 4.567 billion years old. I guess it's a similar situation. They may well believe that the climate has not warmed, but I'd argue that their view is not science based. As a scientist I'd like to know whether or not what went on 50mya is at all a good proxy for what happened in the last 100 years. So is it a good proxy for burning coal in China, whatever it was that happened 50mya, and are you addressing that question? If I was using the climate of the past to say this is what the climate will be in 100 years - so looking back 50mya it was 12 degrees warmer in China, therefore that is what it will be like in 100 years' time - then that would be wrong. 50mya CO2 levels were caused by natural processes, with similar levels now resulting from burning coal; what about all the steps in between? That's why we look at ECS: the models run until they are at equilibrium, which is a much closer state to the geological past than the transient we are in now. When you look at the amount of warming that happens in a climate model on a 100-year timescale, it's about 2/3. The warming we see now, as a result of the forcings, is about 2/3 of the full response. In the geological past we are looking at the full response, so we compare the models' full response to the past, then we look back at how the models handled the transient, the shorter-term response. So not quite the same: we are evaluating how well the models simulate the ECS, which is best represented by the geological past, because there is no transient that is an analogue. If Trump and Cruz are climate change sceptics, do you know of any political statement that, in their uncertainty, they have put some funding into doing some more research, so they have clarity? Anybody who is not sure where the truth is, in a hypothetical position, you would then say put some money in there and get some results? With the messages they are coming out with, it is the opposite really. They're sure that nothing is happening. I've heard they want to close the climate science section of NASA and put the 3 billion dollars that is normally spent on studying Earth climate into the planet-exploring bit of NASA.
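That 2/3 point can be put into numbers with the same log2 rule as before. A sketch with a mid-range ECS; the figures are illustrative, not the speaker's:

    from math import log2

    ECS = 3.0                   # K per doubling, mid-range
    REALISED = 2 / 3            # fraction of the full response seen after ~100 years
    eventual = ECS * log2(560 / 280)     # a full doubling -> 3.0 K at equilibrium
    print(eventual, round(REALISED * eventual, 1))   # 3.0 K vs ~2.0 K realised so far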
Is there a falsification route they could possibly go down, given enough funding? They've tried, particularly in the USA. They've had a so-called sceptic group, Berkeley Earth, revisit it. A physicist from Berkeley, California, with a lot of private funding, set out to show the temp records were all cooked, that they were bogus, 150 years' worth of temp records, and they came up with a record that was identical. In the post-truth age it doesn't matter, falsified or not, people will still vote for Trump. ? In 2009 he was all in favour of the Copenhagen climate accord, so he's changed his view since then, for whatever reason. One of the sceptic arguments seems to be that climate scientists change their minds. Go back to the popular science programmes of 40 to 50 ya: they were talking about when the next ice age was due, way slower than what we're doing, but a comparatively short-term effect compared to ???. What's changed, from a climate science perspective, from saying the next thing up is an ice age to the next thing up is extreme warming? I used to be a geochronologist, and the reason I got into climate science: I was sitting with a colleague of mine, doing my PhD, in about 1990, and I was saying the Earth chucks out loads of CO2 from volcanoes, it was all natural cycle, there is a lot of noise in the climate system; and my colleague said no. Now we can be pretty sure that we're outside the envelope of natural variability, whereas back in the 1950s you couldn't be. It is the same mistake that all these climate sceptics make with the "pause in global warming": no, you have to look at the bigger picture. Only 2% of the climate system's heat is in the atmos, 98% in the oceans, so short-term wiggles are just different amounts the oceans are storing. We can't measure the oceans well and we don't have records that far back. When you look at thermometer records on the Earth's surface, that is only 2% of the heat of the climate system, so of course it will show ups and downs. So it's necessary to look at a scale broader than those changes in atmos heat storage. Jim Hansen in 1988 stood in front of Congress and said we are now 95% sure we are outside the envelope. There was purely the observational record to base that on; then there is climate science that says there should be a relationship. You make an hypothesis, test it, and that's what we're doing. The guys predicting global cooling had probably given up by then. You have to act on the evidence as it is, and I think we've built a pretty good case. Is there global monitoring of ocean temps now? Yes, it's called the Argo programme, very expensive, floats that go up and down the oceans, about 4,000 of them. But they only go down to 2,000m, which is the majority of the ocean; the mean depth of the oceans is about 3,400m, so we are missing some, but it is quite well mixed at that lower part. That is now, but pre-1990s we did not have much idea of that; it then depended on what temps ships had taken. It's a multi-million dollar international programme that scientists at the NOC take a strong part in. The UK has some floats, but the USA by far has the most. So it could be catastrophic if Trump pulls the funding plug; these big infrastructure programmes the US always leads on. So, water vapour and CO2 in the atmos. A lot of gases are being made and utilised, for instance in air conditioning systems, that are up to 1000 times more influential than CO2. With increasing wealth leading to more AC use, are we seeing that making an effect on the combined contribution?
The volume of these will never be anywhere near the CO2 proportion, but their effect is so much greater? So like CFCs and other manmade greenhouse gases - I don't know. The main issue with CFCs is they react with ozone, and the ozone hole is healing. I know for a fact from someone in the automotive industry and government that they are developing AC gases that are only 100 times worse than CO2, but they are very explosive. ? I guess the advantage is that it's small fry compared to atmos CO2. I think there are about 1000 petagrams of C in the atmos; the other gases would be minor in comparison. On tundra permafrost melt and methane release, is that going to happen? It's a well-known positive feedback, receiving a lot of attention. Methane is about 26 times more potent a GHG than CO2. That permafrost melt is not built into climate models. If there was a big disconnect between the models and the geological record, then we might first suspect that process, but there is not. Also there are observations, like how little methane was released by the Deepwater Horizon spill in the Gulf of Mexico: most of that methane was digested by microbes in the water column. And there is venting methane off the shelf of Norway, shallow water 80 to 100m, where you can see the bubble plumes, but flying a drone over the area there is no extra methane observed. It's just not escaping from the water. So we think methane hydrates and permafrost methane will not play a big role. There is a long-term study that looks at methane in a town in Canada. In summer the wind blows in one direction and they get a methane recording, but over the years, with the current warming, there has been no change in the methane content. It's probably turning to CO2 and not getting out; it is quite reactive and may just be digested and not released to the atmos in significant quantities. Are you using boron isotope analysis for age determination or some other reason? That gives us the ocean pH; it tells us the pH at which those shells lived. Then you get the age from the stratification? Much as anthropogenic CO2 is dissolving in surface water, making the oceans acidic, the ocean pH tells us the CO2 content of the atmos: the more atmos CO2, the more acidic the oceans are. There is a whole cottage industry on the dating of sediment cores. Very simply, we know roughly how fast they accumulate thru time. There are regular reversals of the Earth's magnetic field, the last one about 800,000 ya, when N becomes S and S becomes N. That imparts magnetism on the sediments that we can measure, and that gives us tie-in points. So if 5m down in a core the magnetism has flipped the other way, that is about 800,000 y old. Then some cores may have ash layers in them, volcanic ash, and we can date those events. From having a big archive of cores, some of which are very well dated through these different methods, you can use the climate cycles to "tune" your records: if you have a climate cycle in one core, compare it to a well-dated core and get the age that way. Also by stratigraphy, the appearance and disappearance of certain organisms. We don't use boron for dating purposes as it's stable. We do use C isotopes for dating the top 20,000 years. What is it we should be telling Trump? I think a sensible thing to do would be to invest in green energy. Look at the price of solar panels and how much they've dropped. The oil industry has a lot of subsidies and a lot of infrastructure that other energy sources have to compete with.
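Back on the sediment-core dating for a moment: the simple age-model arithmetic in that answer, using the ~1cm per 1000 years accumulation from earlier in the talk, with a magnetic reversal as a tie point. A naive sketch of my own:

    RATE_CM_PER_KYR = 1.0        # typical ooze accumulation, from the talk

    def naive_age_kyr(depth_m):
        """Naive age model: constant accumulation, no compaction."""
        return depth_m * 100 / RATE_CM_PER_KYR

    print(naive_age_kyr(5.0))    # 500 kyr from the average rate alone...
    # ...but if the last field reversal (~800 kyr ago) is found at 5 m, the
    # true local rate is nearer 0.6 cm per kyr, and the whole age model
    # gets re-anchored to that tie point.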
Oil, for instance, is delivered by road; that infrastructure is already there. It's subsidised in many such ways, not just in exploration terms. It's built into our very fabric and we need to help other technologies. I followed the Romsey MP as part of a Royal Society pairing scheme, and I was in Parliament when the govt cut the subsidy for solar panels from 95% to 2 or 3% or whatever, and it was devastating for the industry. And devastating for green energy in general, as why would any industry come to a country that does that - builds you up and then cuts you down. You could be really rich investing in green energy, look at China. The west only thinks long term; do the developing countries copy the economic history of the west, such as everyone driving a car? A big chunk of the Paris agreement was how much money the developed world should pay the developing world to avoid making the same mistakes that we did. That is something that Trump wants to get out of.

Monday 13 Feb 2017, Dr Marc Molinari, Solent Uni: Non-destructive testing of railway wheel sets. 16 people, 1.5 hours. This came from a study part funded by the RSSB, the Rail Safety and Standards Board, funded by the rail industry and the govt, to ensure we can enjoy rail rides. A map of the Swansea area and a place called Oystermouth? and the Mumbles. In 1804 there was a huge need for transporting coal, iron ore and limestone from the sources onto canals; Swansea dealt with all that and shipped northwards and eastwards. In the Mumbles they did not have a road to Swansea but they had all those materials. The first railway was established to transport that: a carriage on wheels on rails pulled by horse, the oldest railway. Q: no it wasn't. Tyneforth? had railways about 100 years before that, horse-drawn on wooden rails. You can still see the Causey Arch railway bridge, the oldest railway viaduct in the world, for transporting coal from the mines down to the Tyne. The Oystermouth Tramroad? Company built this one, length about 5.5 miles. A few years after it was built, the company asked for permission to transport passengers as well, because there was no road, and the govt approved it. 48 years later the rails were changed from 1290mm to 1.4m wide gauge. Ultimately 1.4m became the standard for most of the other railways. In 1877 steam power replaced horses. Just before 1904 they tried to use a battery-powered accumulator car. Batteries back then were very rudimentary, jars with liquid and metal. Very unsuccessful, the trams would not move. 100 years later we now have battery-powered cars. In 1928 electrification came about. Eventually a road was built and revenue went down. Things developed nationally: the rail network of 1963, then Beeching and the loss of the small goods transport branch lines. If a railway was not used it was removed, and that removed some that were used. The density of the network changed a lot. The current Network Rail network has high density in the SE, the Liverpool area and Manchester, Glasgow and Edinburgh. Today there is 16,000km of rail track, plus lots of private tracks, tourist trains etc. Looking after the main network takes a lot of time testing it: trains that check the rail quality drive along the tracks every day. There are also additional transit systems like London Underground and the tram networks. In 1994 the UK was connected to Europe by the Channel Tunnel. Britain has one of the densest networks in the world; just looking at Europe, Britain has 20% of all Europe's rail journeys. That is about 65 billion passenger-km per year, a massive figure. Railways today: a comfortable smooth ride, lots of us use it. It is often overcrowded in terms of people and timetables, with few slots to put additional trains on the network. It's fairly safe, few accidents considering the billions of miles travelled, probably the safest form of transport. So, maintenance, and looking at the wheels. A maintenance shed can have up to 12 trains being serviced at the same time. It includes regular servicing: emptying out of toilets, oil checks, the contacts between rail and wheel, the wheels, brakes, the undercarriage for missing parts etc. A lot of activity. Considering just the wheels: every carriage has 2 bogies, each bogie has 2 wheelsets of 2 wheels, so 8 wheels per carriage. A low estimate of weight is 8 tons per carriage for modern lightweight ones, the Siemens 700 series. So 1 ton per wheel, but the contact area is about the size of a 2p coin. The whole weight of a carriage is on a contact area about the size of a DVD.
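The 2p-coin claim can be turned into a mean contact stress in a couple of lines (my own numbers; a 2p coin is about 26mm across):

    import math

    force = 1000 * 9.81                 # ~1 tonne per wheel, in newtons
    area = math.pi * (0.0259 / 2) ** 2  # 2p coin, ~26 mm diameter, m^2
    print(round(force / area / 1e6))    # ~19 MPa mean contact stress

    # The true steel-on-steel contact patch is far smaller than a coin, so
    # real (Hertzian) stresses are much higher - hence the diamond quip below.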
8 tons is on the low side; it's much more for goods trains. A commuter carriage overloaded with 100 people may double that weight. The material of carriages is very light nowadays, though there are heavy things like air conditioning in there. Q: Take a classic 47 series diesel locomotive, that was 114 tons on 2 bogies, so the contact stress on a loco was very high. You could make diamonds at that pressure. These contact points are one of the safety-critical points, because if something goes wrong there it could derail the train, with ongoing impact on public perception of rail travel and on the bottom line of rail operation. The wheels are made of steel; it is the geometry that needs to be looked at, whether any of the profile is lost. It is a special profile they are constructed to. The profile changes with use, wear and tear. Ultimately they wear to the point the geometry goes below the safe limit, or it needs to be reprofiled. There can be surface defects and also subsurface defects, inside the metal: delamination of the steel inside, cracks developing from the inside to the outside. The axle also needs inspection. Typically the wheels are pressed, at high pressure and hot, onto the axle. 1 wheel is about 200kg, 2 wheels 400kg, plus the axle, about 600kg in all. With brake discs on it can easily be 1 ton. It's not just 1 type of wheel: some have brake discs inside the wheel like a car, and also ones with tread brakes that sit on the outside. Tread brakes used to be more common, brake blocks pulled against the wheel rim. Siemens trains are going back to tread brakes because they clean the wheel while it's turning and braking, so a nice shiny surface. Audience: mention of the 1:20 profile on the rim, steering round corners and the stopping of hunting. In the old days there was no planned stiffness in the bogie, so you had a wheelset that ran in horn guides. As it went round a corner there was nothing to control it until it hit the horn box, and then it was infinitely stiff. When it wears you go to a steeper profile, so when you go round a curve it wants to steer more and oversteers, corrects, oversteers - a knocking noise. That requires keeping the rim profile to 1 in 20, which means lathe-turning the wheels quite frequently, every 100,000 miles, to keep the stability in the vehicle. Later on they brought in planned stiffness; many components are now rubberised, so you can go farther down the steeper profile. The profile nowadays is called a worn profile, P8; P1 was the original 1 in 20. P8 means you don't turn off so much when reprofiling the wheel. With lots of rubber - vertical, lateral and yaw dampers - you manage the frequencies that a wheelset could pick up going around corners. A lot of science goes into all that, the tread profile and the angle of the flange. Get the wrong angle on a flange and you climb and derail. You can get roll-up on a flange, a toe-radius buildup, which can then pick up on points and derail there. That's what we found when we did our research: tight margins and limits on those factors. NDT means you don't damage anything. If you want to inspect internal material, you could break it and then know what was inside; ND means a method that does not require breaking anything. You can determine internal crystal structure without any damage. An early example was the wheel-tapper, an engineer with a long-handled hammer who tapped the wheel, listened to it, and based on the ringing sound he could hear: healthy wheel, or a crack somewhere.
They were very skilled people with very good hearing. Frequency of inspection: Southern and GTR have mileage intervals, but there are also time intervals. After 60,000 to 80,000 miles they inspect the bogie and train body; every 32,000 to 36,000 miles there is a wheelset examination, measurement and gauging. Compared to cars, a longer interval. In comparison, French TGV trains get a daily automated railside inspection of the underside and pantographs. Every 5 to 6 days or 4,500km there is a ? inspection. Every 18 days the traction motors and bogies are maintained at the depot. A huge turnover is required, building into the overall running costs of trains. Channel Tunnel trains also get weekly inspections, or about every 5000km. Manual measurements are common. The geometry of the rim: a slide-rule arrangement that tells you the flange height and thickness; if flanges get thin, the train could derail at points. The Swallow gauge?, for looking at the toe radius on the flange. A magnetic one clamps on for measuring the tread height at the centre. Once they go below a certain value, with tight tolerances in mm, the wheel must be reprofiled on a massive lathe. The train drives into place, the rail is replaced by the lathe, which automatically removes steel - the norm is 1.5cm removed - both sides at the same time. If one side is damaged then both sides have to be turned to match. The other pair of wheels can be different dimensions, but not co-axial wheels. The lathe rotates the wheels: the carriage goes onto the lathe, the track drops away to allow the wheels to be rotated and machined down. It takes about 45 mins to an hour. This metal removal can be repeated 4 or 5 times. They come with about 8cm that can be removed; at the end there must be a minimum of 1.5cm remaining. Below that, the whole set of 2 wheels and axle is melted down for new wheels. There is a hole in the wheel disc: inject oil at very high pressure and the wheel comes off the axle, and replacement wheels are pressed onto the axle. GTR don't do that, they send them off for recycling. On the same axle both wheels must be the same diameter, or they would always be going round corners on straight rails. Q: Is the train slower after taking a cm or 2 off the diameters? The wheels turn a bit faster and the motors can cope with that. Also the bogie dips down a bit, but the carriage remains horizontal. Q: Is it a different motor driving each axle? Not all of them, it varies. On the 165 class all wheels are driven, using a hydraulic transmission, so all wheel diameters must be very much the same. For HST trains all 8 cars are undriven; each power car has 8 driven wheels. Where you have a driven set, you need friction between the driven wheel and the rail. So what if there is ice on it, or it's really smooth, say brand new wheel and brand new rail - you get wheel spin. For that situation there is a sand dispenser on the driven wheels: a tube near the wheel, the driver presses a button and sand is squirted out. On the other hand this also damages the wheels and track, so it's always a balance, but you get the train started. Similarly with leaves on the line. Leaves sometimes block the wheels, getting into the tread brakes. If the wheel does not spin for some reason, it slides, and once it slides it gets a flat, rubbing steel over steel. There are a lot of condition monitoring systems in use. How do you measure the profile of a wheel? You do it manually when in a depot, but can you do this while it's in service?
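Going back to the reprofiling question: for a fixed line speed, a smaller wheel simply turns faster, which the traction system compensates for. A quick sketch assuming a 90cm wheel (my assumption) and the 1.5cm cut quoted above:

    import math

    speed = 25.0                 # line speed, m/s (90 km/h, my assumption)
    for dia in (0.90, 0.87):     # new wheel vs one reprofiling pass (2 x 1.5 cm off the diameter)
        print(dia, round(speed / (math.pi * dia), 2))   # revolutions per second rises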
For in-service measurement, a number of companies have come up with systems that can measure trains while running past, using a laser projection and a camera, at up to about 17mph currently. So a log is taken that that train, at that time, has an issue and should be removed for further action. There are hand-held devices that let engineers measure the profile, and another system sits next to the rails with 8 cameras. These days you can have accelerometers in hand-held devices, as in mobile phones, so you know the position and attitude of the device when recording the laser scan lines while moving the device. The Olympus system works in a depot with the train still: it clamps onto the rail, lifts up the wheel hydraulically, the wheel is turned by the device and the internals of the wheel are measured by ultrasound, after squirting on water as a contact material, measuring the reflection and absorption. Other systems sit in the rail itself, so it's a matter of mounting sensors in the rail to measure the wheel via the firm contact point between rail and wheel: an electromagnetic field measures the surface and, for a thin layer, also the interior structure. This has been done experimentally, but the question is whether the right material can be found for the sensor-holding rail replacement section, to hold the whole weight of the train on a small contact surface without getting damaged in repeated usage. What are we looking for? Rolling Contact Fatigue: with steel on steel and rolling one of them, you get slight misshaping of the steel. That results in small cracks across the tread, less than 1mm in width, up to about 2cm long. If that is detected then the train must come out of service and be reprofiled. Wheel-flats often happen with leaves on the line. To detect this they use a wheel impact load detector: a piece of rail the train goes over, and if the wheel does not turn smoothly there is a clatter, and the detector picks up those impacts. If it gets really bad you end up with a red-hot glowing piece of wheel. Another defect is hollow tread, hollowing of the 1 in 20 slope of the tread; at a difference of 2mm from the true profile the train has to be taken out of service. Cracks come from failed material or from heating, very fine cracks. Fine surface cracks develop into bigger cracks; if detected, the train must not even be moved, it must be skated into a depot. Flaking happens a lot, caused by corrosion and also by sand use; it must be reprofiled if detected. Flange defects: toe-radius build-up, where metal is pushed up against the flange and builds up, often goes along with thinning of the flange. With thinning, at points the wheel does not slot in easily - a dangerous situation. Q: When you say pushed, is that almost a liquid steel state? Over time, working the metal by pushing hard enough, it will flow. NDT methods: ultrasound, sending an acoustic wave into the material. Magnetic particle inspection, where a liquid with ferromagnetic particles suspended in it is applied to the surface and you apply magnetism or an electric current; if there is a small crack, the magnetic field is not continuous in the material, the ferro-material accumulates in the crack and you see a black line, a contrast between no-crack and crack areas. It's messy and has to be cleaned up after. Ultrasound is used with a gel for coupling the sound into the material; it can be used without contact gel using an electromagnetic acoustic transducer. Eddy currents: electromagnetic currents in the surface.
Radio frequency impedance: smooth material gives no reaction to an applied field, but little cracks can act as little antennas, giving secondary fields at different frequencies. Then there is the interpretation of what these measurements mean, in terms of damage or in terms of geometry, trying to reconstruct how and where these anomalies are coming from. Do it properly and you can image 3D properties within the material, much like baby-scanner images. Standard ultrasound uses a coupling gel - messy. An electromagnetic acoustic transducer uses a magnet, static or electromagnet, and a coil underneath. Pulsing the coil induces eddy currents in the material; those currents experience a force, the Lorentz force, which launches an ultrasonic wave inside the material as normal. It does not require direct contact and gel. The disadvantage is that the signal-to-noise ratio is difficult to handle, due to the small signals to detect. We are looking at ways of automating condition and measurement systems. Engineers working in this industry, having built up many years of experience, are getting rarer with retirement, and not enough engineers are coming through to make up that loss. At the uni, whenever we have graduates coming out they immediately get jobs. There are about 80,000 graduates needed annually that are missing in this and other similar industries, just for the UK; some say up to 130,000 engineers short. If you are an engineering student these days, you can pick where you want to go after your studies. Cost saving is always a factor, turning round trains quickly. You want to make maintenance intervals as long as possible without losing QC on the wheels. This checking process is time consuming, with a number of people going around individual wheels. If that could be automated, at 4 to 6 trains a day, the annual savings would be about 75,000 GBP. Consistent measuring accuracy is a factor: doing this manually, as shown in a number of reports, the reading repeatability is very low - different measurements on different days from the same person, and different people measuring differently. An automatic checking process should be more objective, taking out personal judgement on where a gauge is fitted etc. And if you can capture all that data, the more data you have to analyse, the better for defects and their detection in future. There are loads of different ways of perhaps automating this. Rolling along a 90cm wheel, you need a length of about 3m. In a depot the speed is max 5mph, a bit over 2m/s, so about 1 second to record the detail of the whole circumference. We CAD-analysed different sensor systems and attachment arrangements. Many depots have inspection pits under the track; with an inspection pit it's easier to attach or install something automated that rises up and does the inspection. We've done 3D acoustic analysis of how waves travel through the material, reconstructing on the inside what we could see from the outside; extracting the interior picture is quite a challenging process. A curious early experiment with a bike wheel: capturing a rotating bike wheel at speed, stitching the pics together into 1 long image, 80,400 pixels x 1920 pixels. With captured data it's always possible to return to it. Our latest staff member is Baxter, a robot of a type used in industry, in an open-source robotic environment and OS. This one has 6 axes on each arm, with grabbers and sensors, and can go anywhere. There is a huge amount of software out there, all written in Python.
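Two quick sketches of the numbers behind all that (mine, using typical textbook values): the pulse-echo rule the ultrasound and EMAT systems rely on, and the scan-geometry arithmetic quoted above.

    import math

    # 1) Pulse-echo: a defect's depth follows from the two-way travel time,
    #    assuming a typical longitudinal sound speed in steel.
    V_STEEL = 5900.0                             # m/s
    echo_us = 10.0                               # a 10 microsecond echo...
    print(V_STEEL * echo_us * 1e-6 / 2 * 1000)   # ...puts the flaw ~29.5 mm deep

    # 2) Scan geometry: one full turn of a 90 cm wheel at the ~5 mph depot limit.
    circumference = math.pi * 0.90               # ~2.83 m of rail needed
    print(circumference / 2.2)                   # ~1.3 s to see the whole tread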
To me what was amazing with this project was to see the scale of engineering that goes with railway systems, the things you don't see as a passenger. The quality of just maintaining wheels is amazing. Q&A Is it just the UK where goods trains and passenger trains are entirely separate, or is this universal? Why not clip a goods wagon or 2 on the back of a passenger train? Passenger trains have to observe a very strict time schedule; goods delayed for an hour is no great problem. Freight trains are heavier and on some lines have speed restrictions. I'm surprised we are still using 200-year-old technology, same gauge, steel wheels on steel rails. Any new wheel technology around? Standards have changed. Over the last few years the grade of steel has changed and the wheel profile has changed, P8 now, previously P1 and P9. The grading at manufacture of the steel is very closely defined now. All wheels now, after manufacture, are ultrasonically tested before going into service. I was thinking rubber-tyred wheels perhaps? There was a big accident in south Germany due to failure of a rubber-damped wheel: a base steel wheel, rubber on the outside and then over that a steel tyre, used for damping the vibrations from travelling fast. One of those rubber sections perished or something and the steel rim came off. Steel wheels are crude but reliable; you can take material off them and still have a solid steel wheel. There are also steel wheels with a steel tyre on them, so the tyre can be reprofiled, or replaced when worn or cut too low. There is only a small number of wheelset types allowed; it's mainly down to the grain size of the steel. I believe for scheduled maintenance of helicopters there is permanent recording of noise in service, for any long-term changes in vibration and noises. Is there an equivalent for monitoring passenger coaches, perhaps for the engines and traction systems? Recent technology developed at Chilworth Science Park by Perpetuum: a vibration monitor. It harvests vibration for energy. It sits on the axle and vibrates while the train goes along, like the watches that are self-powered by arm movement. That energy is stored, and at the same time it monitors the frequency of vibration as the wheels go along. If you get a continuous additional frequency, rather than a temporary one from going over sand or something, it will detect that and inform the train information system, which then records it. That data from the train can then go live, via the mobile phone system or wifi, to the maintenance engineers for assessment: leave the train in service or take it out. On Crossrail, all those eventual trains have something like 8 or 10,000 monitors on them, continuously monitored for all sorts of things, like aero engines are these days. ? Then there are loads of data to process through. If there is a call for doing this, it's because you get cost savings. For aero engines these days you don't buy an aero engine, you buy the power and you only lease the engine when it has power in it. If it breaks down, you stop paying, so the manufacturer wants to make sure that engine keeps running all the time. Same with Crossrail trains: if they break down, you stop renting until they run again. It makes sense to have sensing systems on the train rather than the track? The track is maintained by Network Rail, the trains by train operating companies, but they are owned by rolling-stock companies. So 3 different companies are involved, and a lot of discussion is going on now about who does what.
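A toy version (mine, not Perpetuum's actual method) of spotting a "continuous additional frequency" in an axle vibration signal with an FFT; the 30 Hz and 217 Hz tones are hypothetical:

    import numpy as np

    fs = 1000                                    # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    healthy = np.sin(2 * np.pi * 30 * t)         # normal rotation harmonic
    fault = 0.3 * np.sin(2 * np.pi * 217 * t)    # hypothetical defect tone
    signal = healthy + fault + 0.1 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    print(freqs[spectrum > 0.2 * spectrum.max()])   # peaks near 30 and 217 Hz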
The yellow trains that go round measure the tracks; the whole UK track is monitored by them continuously, recording the state of the tracks. Some such trains can reprofile a bit of line, or it's necessary to cut a piece out, thermite-weld a new rail in place and then polish the tops. Any advantage in replacing the chassis with carbon-fibre to reduce weight? The Siemens 700 series are very light, a lot of aluminium but also a lot of plastic; it could be carbon-fibre, but it's expensive over large areas of fibre. Is there an addiction to an old style of engineering? Steel is better in a fire situation. Thousands and thousands of miles of steel. But everything is steel; the footbridges look strong enough to run a train over them? We do have new materials. Perhaps other countries can create new versions of traditional structures more easily: go to Japan and bridges are made of bamboo and other different materials, designs that look good and last. If something is established, we know it works; for using new materials there are often extra costs for changing manufacturing methods. Vestas?, the IoW wind turbine makers of very large strong plastic structures, could probably say we could make such a bridge, but they'd have to change their processing methods. Sometimes people don't want to change. Eg rockets stayed much the same and then along came SpaceX: a more efficient engine, better costs as it's re-useable. Everyone stands back and says why didn't we think of that. Trains seem to be stuck in a rut? Tesla similarly, with home batteries for solar cells to store surplus energy. The sand business, is that used a lot, or continuously? It's just used to get moving, blowing sand under the wheels. The driver, I think, gets a flashing light if they lose traction and then they blow sand out. Once you are rolling you don't have the friction problem. For cars there are all sorts of fancy traction controls, if a wheel starts to spin? Used on trains? Not that I'm aware of. There is a lot of regenerative braking these days: putting the brakes on effectively puts dynamos in the system and generates electricity on braking. Does that go back to the third rail or pantograph, or is it stored on board? On board, I think. They have huge batteries, about the size of this pool table, for if they lose power at crossings or going through stations where there is not necessarily a third rail. Pantograph contact is not that continuous either.

Monday 13 Mar 2017, Professor James Anderson, Soton Uni [third return visit]: The Mathematics of Fractals. 33 people, 1.5hr. There is the old saying that one should not drink and derive. I'll try to get across what we as mathematicians mean by a fractal (F). I'm not going too deep into the maths. I'll work thru 2 basic definitions. First: a mathematical set that demonstrates a repeating pattern at every step and every scale. In a loose sense: I have some thing, and if I take a small piece, focus down on that small piece and, given infinite resolution, blow it up, the result should look like what I started with. The simplest thing like that is a line. A straight line on a piece of paper: take a tiny piece of the line, expand it, it's still something like a line; repeat, and it still looks like a line. That's fine, but we don't want to think of a line as a F object. A line is too simple an object to think of as a F. So we have to be careful with definitions like "looks similar on any scale". Now, to engage your imaginations, the Sierpinski Triangle. A big orange triangle thing: mark the mid point of each of the 3 sides, draw a triangle between those 3 points and cut it out. That leaves me with 3 big orange triangles. For each of those I do exactly the same. Every time I see an orange triangle I take the mid points of its 3 sides, join them together to make a little triangle, and colour it white, the same as cutting it out. Just keep going. What can make Fs a bit headache-inducing at times is what happens at the "we just keep going" part. The result is an orange, regular, spider-web-like thing. That is an example of a F. Blow up any piece of it and it looks exactly like the original. Fs look the same on any scale: no matter how tiny a piece you take, on blowing up you see much what the original looked like. This is a very regular sort of construction. Simpler still is to take a piece of line and remove the middle third of it; I'm left with 2 pieces of line, and for each of those I remove the middle and have 4 pieces of line that are much shorter. Keep doing that, over and over again, and I get what looks like dust scattered on the line, known as a Cantor set - the middle-thirds Cantor set, as I'm removing the middle third of each. Georg Cantor was a great M of the late 19C. He came up with things that drove him insane and rendered him an outcast in the M community, until we realised he was doing everything that we fundamentally wanted to do. He wanted to get a handle on some of these things, and on how we get a handle on them. Another example of doing the same thing repeatedly at smaller and smaller scales, never stopping: start with a triangle, not caring about the inside, just caring about the boundary edge of the triangle. Repeating something over and over again to get something that's fractal. Instead of removing the middle third of each side, I replace it with 2 sides of a small equilateral triangle, replacing the flat middle of a line with something pointy. I now have 4 pieces of line, each shorter than the original line, and every time I have a line I can do the same thing. Just keep doing the same construction. At every step I get something that looks more and more complicated, fairly quickly. If I can do this infinitely many times, then take a small piece of it and blow it up, I will see exactly the same thing. What we call the Koch snowflake is what we end up with after infinitely many steps. In a real sense it's impossible to draw. This is where maths separates from the real world.
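As an aside, the Sierpinski triangle described above can be rendered by a surprisingly lazy algorithm, the "chaos game" (my own sketch, not from the talk): keep jumping halfway towards a randomly chosen corner.

    import random

    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25                        # any start point inside works
    points = []
    for _ in range(100_000):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2    # jump halfway to that corner
        points.append((x, y))
    # Scatter-plot `points` (eg with matplotlib) and the triangle emerges.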
When we do something infinitely many times we get something in the limit, which we may not, with any fidelity, be able to draw in the actual universe. The actual universe is fundamentally lumpy, it's quantised, it's not a continuous thing. We Ms would love everything to be continuous. We can do that same construction all over the place. So take the surface of a globe - not the whole globe, I don't care about the inside, just the surface. I remove a bunch of big discs, a round bit, then remove smaller discs from what remains, and keep removing smaller and smaller discs. It's harder to see the regularity compared to my earlier examples; it's harder to understand the rule that we're using to remove things. Not as simple as just removing a middle triangle; here we don't have the seeming regularity we had with the triangles. The field in which I do my research is a field where we generate fractal objects, and we try to get a handle on this general way of doing things. The basics is: things that look the same on every scale. One thing Ms have to do in our work is define what we mean by things in a fairly precise way. How do we define "the same on every scale", or "similar on every scale" to allow a bit of fuzz? There is some formalism, some structure, to what we mean by same on every scale. I won't tell you what it is, as it's kind of complicated; I just want you to believe me when I say there is a way of being very formal, in a very precise M sort of way. Iterated function systems is the technical phrase. We generate things that are properly fractal objects and we get some nifty pics. In this image the boundary of everything you can see is just a circle. You're generating an object that is very much a fractal, but every boundary is just a circle. Is that jagged enough to be a fractal thing? Because it's nowhere near as jagged as the Koch snowflake when it gets done, an incredibly jagged thing. Appearance depends on what you want to mean by fractal. For me there is a very precise definition, not just that everything looks the same on every scale. The classical Mandelbrot set object: a much less regular object than we've seen so far. So the first thing we can ask is, does that thing satisfy the definition of looking the same on every scale? Take small pieces of the MS and blow them up; it does not look exactly the same, but very much like the whole thing. How you build a MS is an interesting juxtaposition of complicated indices?. The colours refer to how fast points are moving. Go to YouTube and you can see where someone has taken a point and just zooms in. Zoom in at a constant speed and you see things that look almost like the MS appearing, no matter how deep you go. So the same basic shape keeps repeating, and you can find it on the smallest scale that you want. An unusually shaped object, but you can find copies of it on very small scales and work it back to the definition of things looking the same on every scale. You need a loose definition of sameness to make that work. I don't actually like the definition of things looking the same on every scale. Go back to The Fractal Geometry of Nature by Benoit Mandelbrot. The set did appear in a paper by Brooks and Matelski a few years earlier, but they only had a crude line printer and so could not get an accurate picture of all the complexities; they had the M underpinnings there, but Mandelbrot was a better expositor of the M. Whole positive numbers and 0, ignoring negatives for the time being.
These are the numbers with which we count apples etc. A line is a 1-dimensional thing. A naive way of thinking of dimension is as degrees of freedom: how many different directions can we move? On a line it's back and forth, one way. On a flat table top I can move in 2 directions: left/right, forward/back. If I start at one point I can get to any other point purely in those 4 terms. For a room I can pick a point, then go forward or back - your forward and back is different wrt you - left or right - again your left and right is different - and up and down, which is the same for you, but it is a pub and it is early. For a room I need 3 directions. We can think of time as a 4th dimension and colour as a 5th dimension; there are all sorts of notions of dimension. What does it mean for a thing to have a dimension that is not actually a whole number, a different sort of dimension? Mandelbrot's book came out in about 1982 and that's where we come across non-whole-number dimensions. He starts with a question: what is the length of the coastline of Britain? It depends on how you measure. Take a crude map and take a piece of string along the coast, measure the length of the string, account for the scale of the map and get a number. If I walked along the beach trailing a piece of string behind me, do I go round every small rock, do tide pools count, high water, low water? I get a curve that looks very jagged, and as I refine the scale on which I'm operating, the length of the coast goes up. The finer the scale, the longer the coast, as I start working around individual grains of sand, even working round things that are too small to be seen but still require going around. Mandelbrot said that sometimes, when I'm trying to measure a thing, using a whole-number dimension isn't going to work. What is the zero-dimensional measure of something? I could count the number of points: 5 apples in my kitchen, I could count 5. I could take the length of something, so 1D, using a length of string, perhaps used repeatedly. For something flat like a pool table I've got area: I know how to figure out the area of a square, and I can figure out how many squares fill my object, even if sometimes I only need parts of squares. For 3D I have volume and can start with a cube: how many do I need to fill up the space, and sometimes I'll need parts of cubes if close to an edge. But what does half a dimension look like, or log4/log3? Go back to the Koch snowflake. An equation: the number in front of the colon is the step we are on. Step 0 is my starting point, an equilateral triangle. I make the assumption - it does not matter - that the length of each side is 1. Adding up the 3 sides I just get 3. Step 1: on each side of the triangle I've taken away 1/3 of the length and added 2/3; now 4 pieces, each of length 1/3, and this is done 3 times as there are 3 sides. We now iterate, do it again. At the next step I'd have 16 pieces on each side, each piece breaking into 4. As I start with a length-1/3 piece, each new piece has length 1/9, replacing the middle with 2 others. 1 piece becomes 4 pieces, but each is 1/3 the length of the original piece, so I get 16 pieces each of length 1/9, and that for all 3 sides of the original triangle. We just keep going, again and again. Then at step n, for each of the 3 sides, I have 4^n pieces, whatever n happens to be, where n is the number of steps. Each of those has length 1/(3^n), because I keep breaking things into thirds. So what is the length of the Koch snowflake, using our usual measure of length, a ruler or piece of string?
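The step-n bookkeeping can be checked in a few lines (my own sketch), and the log4/log3 mentioned earlier falls out of the same counting: 4 self-similar pieces, each 1/3 the size.

    from math import log

    for n in range(8):
        pieces = 4 ** n                # pieces per side at step n
        length = 3.0 ** -n             # length of each piece
        print(n, 3 * pieces * length)  # perimeter = 3 * (4/3)^n, grows without bound

    # "Similarity dimension": N copies, each scaled down by s -> log N / log s.
    print(log(4) / log(3))             # ~1.26 for the Koch snowflake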
I started with something of length 3; after n steps I have something of length 3 × (4/3)^n. The problem is that (4/3)^n, as n gets bigger, gets bigger, and it gets bigger quickly. The Koch snowflake is not built until I've gone through all the infinitely many steps, and the length in the end is infinite. So if I used a normal measure of length to figure out the length of the Koch snowflake, I'd have something with infinite length that I could still draw on a piece of paper. My normal notion of length is not the way to measure the size of the Koch snowflake. It looks as though it should be a 1D thing, building it by starting with a line, but as I build it, cutting and replacing with more spiky pieces, in the end I get something for which length does not really make sense. So 1D is not how I should measure the size of this object; I should not use the normal notion of length. But I can't really use area, because essentially all I've done is build a curve, as I'm ignoring what's inside. I'm just looking at the edge, so it should not have anything 2D about it. So thinking about the size of this thing, it's not 1 and it's not 2, so it has to be something in between. That's where we start to look at dimension not being an integer. What is the appropriate scale to measure size? That is where fractal dimension, dimension that is not an integer, comes from. There are lots of ways that we as mathematicians have come up with to measure dimension. There is topological dimension, Besicovitch? dimension, Hausdorff dimension named after Felix Hausdorff, Minkowski dimension, similarity dimension and lots of others. For a reasonable object like a line they are all the same: I would get 1. For a nice object they are all the same. You could define a nice object as one for which all these different notions of dimension are the same. That's what mathematicians like to do all the time, flip things on their heads. Q: Is nice a proper mathematical term? It is now. I just said that to the internet so it has to be true, right? The most commonly used notion of dimension is Hausdorff dimension. It's hard to calculate the Hausdorff dimension of a thing, because you have to do a lot of stuff. I pick a number D for dimension, a guess as to what the dimension might be, thinking of it as a variable, no assigned value yet. I cover the object that I'm dealing with, the Koch snowflake or the Mandelbrot set or whatever. I cover it with discs, round things with a centre and a radius. I might need infinitely many of them. For each way of covering my object with round things I calculate a number: I take all the radii of the discs, I raise each to the Dth power and I add them up. I'm burying something here — how do we know that adding up infinitely many things gives us a finite number? I won't worry about that; sometimes it does, sometimes it doesn't. If it doesn't, then D is a bad choice. I cover my object with all these discs, so I can no longer see the thing any more for all the discs, and I calculate this number. So covering my object with plates, I get a number. Now I take a different way of covering it with plates, perhaps smaller or bigger plates, or even microscopic plates. Every way — and the every makes it hard — I cover with plates, I get a number. Then I take the smallest possible number out of all the possible ways. There will be ways of covering with a finite number of plates, there will be ways of covering with infinitely many plates.
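In symbols, the number computed for one way of covering a set S with discs B_i of radii r_i is (a standard formalisation of the plate construction; the usual extra refinement of letting the largest plate size shrink to zero is suppressed here, as it is in the talk):

    \sum_i r_i^D

and the "smallest possible number out of all the possible ways" is the infimum of this sum over all such covers. How that infimum behaves as the guess D varies is described next.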
If you pick a nice sort of object, which not all of them are, every way of covering with plates will have a finite sub-collection of plates that still covers — but not all the time. For exceptionally nice objects (again, we'll call that a mathematical term) you get a finite sum. Doesn't work for a line, but does for the Koch snowflake and the Mandelbrot set, the Sierpinski curve; it works for things that you can actually draw. When I look at this quantity and see how it changes as I change D, weird things happen. When D is small the quantity is infinite; when D is big, the result is zero. There is a single point in between where it jumps, and where it jumps is the thing we call the Hausdorff dimension (HD). I'd be comfortable teaching this area to second or third year undergrads. I've buried a vast amount of material here, material that I as a mathematician would find interesting, but over the years I've realised not everyone finds it as interesting as I do. There is a way, a formula, for calculating the Hausdorff dimension. For the Koch snowflake its HD is log4/log3, which is somewhere between 1 and 2. For the MS, and just its edge, we think its HD is 2, but we don't actually know; as far as I'm aware it's still unsolved. A structure that looks a bit like wrought-iron work or a Paisley design — the sort of object I work with on a daily or weekly basis. It's an HD 1.3-dimensional thing. It's built out of a curve, and it has the property that if I blow up any piece that looks flat at this scale, I would see the same thing. How to build an MS: firstly it involves complex numbers. We have our ordinary numbers; we know how to add them, multiply them, the distributive laws make sense, the things we do with brackets, all that stuff. Complex numbers are just a bigger set of numbers, expanding my horizon. Now I'm doing as we did with numbers, but with points in the plane. Instead of numbers, the things I'm adding or multiplying are points in the plane. How can I tell where I am in the plane, a 2D thing? I pick a point I declare to be my origin, then a is my left/rightness and b is my up/downness. That tells me where I am in the plane once I set down my co-ordinates. There is a way — I won't tell you how, I'm just doing what is called complex arithmetic — of treating these as simply points in the plane. I take a point c in the plane, then I do a complicated process: it starts by taking the function f(z), which sends z to z*z + c; c is fixed for the moment. I start with 0 and get 0^2 + c = c. I take that c and stick it back in, so f(c) is c^2 + c. Take that and plug it back in, so I get (c^2 + c)^2 + c, and just keep going. 1 of 2 things will happen: either I will stay not too far from 0, or I will shoot off to infinity. If I stay close to 0, I colour c black; if I shoot off, then I colour c white. I do this for every single point in the plane, and this is the MS. The MS is what you get when I do this operation and stay close to 0. The rule to generate the MS is very simple: shoot off or stay close to home. If I stay close to home I'm in the MS; if I shoot off, I'm not. The interesting thing is, this is a very simple rule but it gives an incredibly complicated result. Because the jaggedness of the boundary of the MS is saying I can have 2 points, however close together, which behave differently. No matter how close a point is, it does not tell me what will happen to that close-by point. That's what makes things fractal: I don't have that sort of control. Going to YouTube where they do the zooming, it's that same calculation at greater and greater resolution, more and more digits.
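That calculation is short enough to sketch. A minimal escape-time version in Python — the bail-out radius of 2 and the cap on the number of iterations are standard conventions for deciding "shoots off" versus "stays close to home", not details given in the talk:

def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    """Iterate z -> z*z + c from z = 0; colour c black if the orbit
    never escapes past |z| = 2 within max_iter steps."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:    # once |z| > 2 the orbit is known to shoot off
            return False  # colour white: not in the set
    return True           # stayed close to home: colour black

# Sample the plane on a coarse grid and print a crude picture,
# much as the Brooks-Matelski line-printer image did.
for row in range(21):
    y = 1.2 - row * 0.12
    print("".join("#" if in_mandelbrot(complex(-2.0 + col * 0.04, y)) else " "
                  for col in range(80)))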
But the fact that you keep seeing the same picture, the MS, on however small a scale, means that this very simple rule is giving you incredibly complicated output. This sort of thing bedevils us all the time. There are lots of things where what we start off with may or may not be complicated, and we have to decide yes or no at the end. What we get at universities a lot is degree classifications. A bunch of things they've done in second year, a bunch in the third year, and we have to decide the classification for their degree. However well you try and set the rule, you'll always find yourself in a situation where students have incredibly close results and will get a different classification. It's not a problem with the rules you're setting; it's the nature of the fact that you're trying to take a lot of complicated things and divide them into 2 buckets or 3 buckets. I had a project, that regrettably I never finished, with someone in the law department: how the law deals with things like murder or manslaughter, people killing people — how fractal such things can be, in some sense. How can you make sense of the complicated interactions that we have where this thing or that thing is the outcome, guilty or innocent, and what the boundary is between 1 side and the other? Often fractal properties will develop. One of the things I've learnt from fractals is you get complicated situations arising from simple rules, where it doesn't matter in the end how you try to separate things — where 2 things are close together you should get the same answer, but you will never be able to do that. There will always be a situation, no matter how you set the rule, where you'll be able to find close situations that get very different answers. Because that is part of the nature of fractalness: very close things having very different outcomes or behaviours. Q&A Is lightning fractal? Many phenomena have fractal aspects. Look at time series of stock market prices, very jagged curves; those are fractally things. People have tried to say: given they are fractal, does that give us a way of handling them and trying to predict the future of stock markets? The answer is no, but people have lost a lot of money trying to do that. If you look at how lightning is formed, it's electricity going, splitting, going, splitting. It's about as fractal in the real world as you could possibly get. If a lightning bolt hits the right sort of ground, it will fuse the minerals in the ground and it will carry on the same root-like structure as in the air. The resulting mineral is called fulgurite: a hollow root/branch structure, underground, maybe extending 10 metres or more. Boole's maths ended up being quite useful. Can fractal analysis be useful in any discipline? Someone called Oded Schramm? did a lot of work on fractal-type things. He was working for Microsoft, trying to understand randomness. There are things we'd love to do by generating random numbers. Usually we use a pseudo-random number generator, using properties of integer arithmetic on a very long scale to generate numbers that have properties of being random, but run it long enough and you end up in cycles. He did a lot of analysis on fractal objects, the random walks, trying to model the stock market. You have to understand fractals to make sense of things, to understand what random actually means. If you could properly create randomness, there are huge applications for doing random things. Ways of evaluating things, by collecting random points and seeing what happens over those points.
It's not as strong a connection as in other maths areas. So it's just pure research? At the moment I think they are. I have to say, as a pure mathematician, it's unlikely anything I've done research-wise will turn out to be useful. But I'm not saying it won't ever happen; it might take a while. We're laying the groundwork for others to use. When Bernhard Riemann did differential geometry in the mid-1800s, he was just doing it as a thing. It is the basis for Einstein's theory of relativity, about which Riemann would never have guessed. Tomorrow someone may use fractal things to do something eminently practical. When someone jumps out of a plane with a parachute over the sea, they cannot apparently tell how high they are, as the waves look the same from all heights? Does that mean that sea-waves are fractal? I suspect they do have fractal aspects. Big waves have little waves on top of them and smaller waves on top of those. I guess the ocean surface at any frozen moment of time is a little bit more than 2D. Do you think it might be possible to predict what the sea is doing, using fractal maths? If you're trying to picture what is happening at any moment, then probably not. As a sort of time average, on a larger scale, then it's more predictable generally. Trying to follow an individual particle through things, you can't figure out what it's doing; but look at all of them at once, then you can make reasonable predictions. Go down that track and you will start finding submarines at depth? I was reading over the weekend about the possibility of using quantum theory to detect submarines, by looking at incredibly small variations of gravity, caused by the fact the sub is not made of water. Both fish and subs are neutrally buoyant, but fish contain more water in comparison to a sub. So the MS might have a dimension of 2 — does that mean it's not a fractal? It depends on how you define fractal. This wiggly object image is just the boundary of the MS. Boundaries of things in the plane you expect to have dimension 1; if it has dimension 2, then we'd make an exception for it. I was wondering about its application to biology, neuron growth or arteries? They start at some point, decide what's around them and make a decision on that? Possibly, I've never thought about it. I suspect someone has considered that, but I've not encountered it in the literature. I can't read as widely as I'd like to, as time is finite. Is time a fractal? You go into Einstein's theory of relativity. You touch a hot stove and a second feels like a minute, etc. I don't know. Can it be used on the expansion of galaxies in the universe? One of the basic questions, as I understand it, in physics, that they've not yet resolved — what Einstein called the theory of everything: the mathematics we use to understand the very small, quantum mechanics, and the mathematics we use to understand the very large, general and special relativity, and what happens in the middle — can they be brought together? I don't know if galaxy expansion uses this sort of thing. Some colleagues model what we think is going on in neutron stars in terms of magneto-hydrodynamics, where magnetism and fluid flow come together. It's entirely possible they are getting fractal effects there, because they are trying to model what it sounds like when 2 black holes run into each other, to predict the wave signals we would see from gravity-wave detection.
I'm interested in the difference between the fractals that are evidently self-similar on varying scales and the ones that are only almost so, like the MS, where parts become a bit squished, not quite the same. The MS looks more interesting because it is fundamentally different? For me it goes back to how they are constructed. Things like the Sierpinski curve are constructed regularly; you're imposing a regularity at the beginning, and that gives you this object. For the 1D equivalent of the Sierpinski curve, you'd take a line, remove the middle third, then from the remaining bits remove the middle third, and you get the Cantor set, the set of dust. The fact I'm removing thirds is irrelevant: I could remove a random section out of each interval, at every stage, and I'd still get a fractal object, but I'd have no idea what its actual structure looked like — 1/10 out of here, 7/10 out of there, doing random things as I went down. A different fine structure, just by having a non-regular construction. For the regular objects we can calculate things about them. That we know the HD of the Koch is log4/log3 comes from the structure being so regular. With the snowflake I could say: at every point where I see a line, I just take out a random set in the middle of the line; I'd still get a fractal object, but I'd not understand it nearly as well. The MS is the sort where we have a regular rule but don't understand it nearly as well. For me the beauty of the MS is that it comes from the initial rule being simple, but we don't know what it's doing at each individual point, as opposed to the very regular initially constructed objects. From regular structures, very calculable, to structures where we have a vague idea of what's going on, to knowing something weird is going on but we can't get our hands on it. Is it possible to use the re-entrant equations? to do rendering? If you take any polynomial, you can do the same sort of thing: you generate the Julia set by using the same basic idea — take a point, follow it by the iteration, and either it goes off into the distance, white, or it stays bounded, black. Some of the shapes that have been in the backgrounds of pictures here are Julia sets of particular polynomials. For each polynomial you'll get a pic. The MS is just a very particular case of it. The fact you are doing it in 2D is just because that is what we can draw. If I take any dimensional space and I take a function on that space, start iterating and following points, I can build something. The first demo of mathematical chaos was due to Edward Lorenz, using a weather model. There is a fractal object sitting in there as well. At one stage he had to reload, the next day, what he'd been working on previously; he reran it and got a very different picture. He realised the very small differences you get by truncating things at 10, 20 or 30 decimal places were having a massive effect on the outcome of the system. So Lorenz and his butterfly were the first demos of this deterministic chaos in systems. It's just that 2 is the dimension that we see. Drawing in 3D is hard. You've drawn some analogies between fractals and everyday life, randomness and Darwinian evolution; they seem more metaphors than mathematically rigorous. What are the practical uses of this mathematics, some practical examples? Nope — not being flippant, but I can't. Prime number theory is used every day, Amazon purchases etc? Number theory, and graph theory, which is what I teach at the moment, have very practical applications: network theory, how you route things through systems of nodes/roads
etc. Fractal analysis is something whose primary use is giving us a language to describe things, rather than a way of attacking problems. It's not developed to the likes of number theory, where we can go from understanding numbers to building unbreakable codes. I think the closest we get is trying to understand notions of randomness, still rather esoteric from a practical point of view. There are times I'd like to view myself more as an artist than a mathematician. Why should we care about fractals? We don't know what someone in a few years will be able to do with some of these ideas. Part of what we do in math is saying: here is a practical question. One of the guys in the applied group of the dept had a grad student who was looking at the mathematics of air-bubble formation in crumpets. You want all the holes in a crumpet equal sized. Actually a PhD thesis of a question. Not everything we do is immediately practical. We're exploring what the universe of math is telling us. After us, others will latch on to particular bits of our math, to answer their questions. Not every bit of math we do will find a home. Some will, but not necessarily in our lifetime. We just don't know what the useful math of 20 years from now will be. Some people are doing very practical things, and some are doing the exploration of the universe of the possible, so that when people need tools, there are tools available to them, to do the great thing they are trying to do. Without this, there will be people of the future asking questions and there won't be anyone to give them an answer. Can you do this backwards — something that looks like a fractal, can you work back to the underlying math? Yes, and sometimes you get some very interesting things through it. So look at neuron formation: what are the processes underlying that, can we get a handle on that, can we use that to understand things? The early people to look into iterated function systems were specifically wanting to build a fern: give it some rules on how to build a fern, a different set of rules to build something else. But on understanding what it will build from an initial set of rules, there has been some basic work on that. It gets very complicated very quickly, and is not always predictive. With ferns, are they truly fractal, or are they fractal only up to a certain amount? From a mathematical point of view, nothing in the physical world is truly fractal, because of the constraints of the physical universe. The universe is quantised, it's not continuous. When you say scale, you can get to such a small scale that you cannot replicate things to a larger scale. Most people would say you don't need to go that far: look at lots of varying scales, see the same sorts of structures, and you are able to apply the analysis that people have developed for handling fractals — not necessarily being able to go to every scale, but many scales. Is there a connection or a parallel with series theory? Things such as the Koch snowflake, taking the infinite limit, have close parallels with series. There are basic notions that underlie series and fractals and these constructions, where there is a commonality in how we're doing things. There are underlying mechanisms that we use with reckless and wanton abandon. For the math-derived fractal sets like the MS, I believe the coloured versions are just assigning colours to the number of iterations. Is there a more aesthetic process than the lumpy colour gradations, giving a nicer image? I think that sort of image is pretty beautiful myself.
I think that is just due to how they set up the grid sizes. I think you can do that sort of thing, but it comes back to the basic question: I have a continuum of possibilities and I'm putting it into a couple of buckets, and the differences where I go from one bucket to another are going to be fairly stark. I think you will get that lumpiness of colour regardless of how you do it, because you are trying to assign colours to things. You might get interesting boundaries between one colour and another, but if I'm using 5 colours, I've an infinite number of things and am sorting them into 5 colours, so am bound to get some lumpiness. Is the coastline of the UK a fractal image — can you derive an iterative function for the coastline? I think there are estimates that the coastline is 1.2 to 1.3 dimensional, but it gets back to this fundamental question of one scale or many scales. It's lumpy, but not as lumpy as the MS. For the Cantor set, taking out the middle third, and at the next iteration you take the middle third out as black bits and fill in the middle third of the white bits, you end up with ever-decreasing dashes. There is clearly a difference between these 2 approaches, and again for the triangle: an empty triangle in the middle, and put a filled triangle in there instead of punching out — it ends up looking much more regular than the fractal set? The Sierpinski set is very regular looking. If you're doing a slight variation of that construction you'll probably get something that is different, but still very regular, in the same sort of way that the Sierpinski curve is regular. You can do all sorts of variations on each of these constructions: give yourself a finite set of possibilities, pick one at random, do that thing at each stage. Whenever you have that finiteness of a set of possibilities, you have a possibility of getting control in the end. When you have an infiniteness of possibilities, that control becomes more difficult and the calculations become much harder. Is there a way of looking at your original function, producing an image from that, but predicting the degree of curviness or holeyness? No. If you look at just quadratic polynomials, for some of them the Julia set will be a nice connected piece; for others it won't be. You can get a huge range of different sorts of shapes coming out of that. There's no overall determinism for that? There are people who try; I don't think they've succeeded yet. It's complicated. It's not clear what information the degree is actually containing that gives you control of the resulting object. It's one of those questions we've not completely resolved — so veering into the chaos sort of direction. Your dinner-plate Hausdorff dimension determination: if you took the Koch snowflake, presumably you will need an infinite number of circles to cover the triangles? No, using stuff I haven't told you anything about. When you have a set like the Koch, it has a property known as compactness. With compactness, if I cover it with an infinite number of plates, there is a finite set of those plates that still covers it. Compactness is a hard and slippery notion. The difference between a Koch snowflake, which is contained within a paper-sized thing, and a line, which just keeps stretching out: compactness is trying to capture the fact that it is contained within a sufficiently large piece of paper. Those sorts of objects have this property that however you cover them, a finite set of those plates will suffice to cover, and you can throw away almost all of them.
It doesn't matter how you do it: as long as you cover it initially with your plates, you can find a finite set. It might be an incredibly large finite number of plates — I'm not saying 5, I might say 57 trillion — but there is a finite set. The plates have to overlap to cover the object. The Mandelbrot formula, how do you come up with that? It's about the simplest thing that's interesting. If I did f(z) = z + c and did the same thing, you don't get anything interesting, because everything disappears off. z^2 + c is the simplest formula which, when you do the process, gives us something interesting. If I used a different, more complicated formula, I'd have something different to the MS, but the same sort of behaviour: a set where things stayed in a neighbourhood, a set that went off to infinity, and a complicated boundary in between. The MS was the earliest image produced because it was the simplest thing they could try. The fractal is really the boundary: look in closer there and I'd keep seeing what looks like the whole boundary. The points on the boundary stay bounded — they might go out a bit and come back, but they always keep coming back. But move away a tiny bit, in a particular direction, and they disappear. A feature of fractalness is: however small a change you make, if you make it in the right direction, you get a completely different behaviour. So it's not the case that things starting nearby each other end up close to each other; that's completely broken. Do you think it's just an artefact of human perception that we find these attractive? Yes. We draw them in nifty ways and use nice colours; it looks like there is a light behind, and it's shining out of the boundary. Part of it is they just look strange — not in a frightening way, but again that's a subjective judgement. It's nifty because it's where I've chosen to work, so I have a deep personal bias to these things. Would you play with them if they were ugly? You've never found any ugly ones? No, never. What about the zeroes of the Riemann zeta function? That's a whole other talk. It's beautiful, but it's hard and scary — but there is a beauty to it. It's not a fractal thing. Are the numbers that you're plugging into that iterative formula integers or real numbers? Neither, they're complex numbers, points in the plane. I can think of them as points in the plane; I know how to add or multiply them. For each point in the plane I can do the iterative process. I have a yes/no answer to the question: do I stay close to the place I started from? I'm colouring the MS by the answer to that question. For practical purposes a complex number is made up of 2 real numbers, but are they just integers? They are fractional numbers. So is the pattern that you get dependent on the precision of the arithmetic? I'm assuming infinite precision: I can do these calculations to infinitely many decimal places for every point in the plane. In the actual universe this is false, but I do this sort of stuff all the time. I believe they looked into using something like this for compressing images? I just don't know about that. Have you put in a function, worked it through, got an image and gone WOW!, never seen an image like that before? I'd not seen that image before (still on the screen), but I still think that is a wow. Once or twice I've done the "I wasn't expecting that". Towards the end, you had one I'd not seen before, very curved? How was that one built (the "wrought-iron"/Paisley one)?
There is a way of building things like this: take a bunch of circles, where the circles don't overlap but they can touch, tangentially. So a string of beads, big or small beads, then start reflecting in the beads. There is a way of defining reflection in a circle. With reflection in a line, you just flip one side over to the other; I can define reflection in a circle similarly — things stay on the same line out from the centre, but get flipped. Just keep doing that, and you get shapes similar to this. That one goes back more than 100 years, interestingly. Way before computers, there were smart guys who figured out how to do things prior to computers. Hand-drawing such reflections is quite an efficient way of drawing something like this. Does this sort of leaf-like geometric patterning lie behind Paisley wallpaper designs? I love Paisley. I've thought about selling such images, but I don't think my colour sense will lead me into fashion.
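For reference, the reflection-in-a-circle described above has a tidy formula in the same complex-number language used for the Mandelbrot set. A minimal sketch, assuming the standard definition of circle inversion (a point at distance d from the centre goes to the point on the same ray at distance r^2/d); the function name and test values here are illustrative only:

def invert_in_circle(z: complex, centre: complex, radius: float) -> complex:
    """Reflect the point z in the circle of the given centre and radius:
    z stays on the same ray out from the centre, but its distance d
    from the centre becomes radius**2 / d."""
    w = z - centre                       # work relative to the centre
    return centre + (radius ** 2) / w.conjugate()

# Points on the circle itself are fixed by the reflection:
print(invert_in_circle(1 + 0j, 0j, 1.0))    # (1+0j)
# Points inside are pushed outside, and vice versa:
print(invert_in_circle(0.5 + 0j, 0j, 1.0))  # (2+0j)

Iterating such reflections through a whole string of tangent circles is what generates the wrought-iron/Paisley pictures the speaker describes.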

Monday 10 Apr 2017, Prof Anneke Lucassen: Cancer Research UK and the 100,000 Genomes Project. 20 people, 1.5hr. The incidence of cancer (C) in this country is going up. 2014, the last year of good statistics: just over 350,000 new cases of all types of C diagnosed. The risk is still higher in men than women. The incidence has gone up by 12% since the early 1990s; we don't quite know why. Probably a combination of some environmental factors, and being better at detecting Cs which might have gone away by themselves. We're not dying of other things first: go back 100 years, and lots of us would have died from other diseases before we got old enough to develop C. C survival is improving overall; the total average over all Cs is that 50% of people in the UK will survive 10 or more years, and that has doubled over the last 40 years, due to treatments and earlier catching of Cs. There is huge variation in survival between different C types. Certain skin Cs have a very good survival rate, and brain tumours have a very poor survival rate. They are completely different diseases, and talking about them as one doesn't make sense. C is a disease of cells. Any cell that grows uncontrollably can become cancerous. Skin cancers; leukaemias, where blood cells overgrow and become cancerous. Gut cells can become cancerous and develop into a bowel tumour. Nerve cells can become cancerous to develop a brain tumour, or a glioma for example. Cell division is very important in C. We need cells to divide to grow from baby to adult. We need cells to divide to heal when we are cut, and to replace general wear and tear in our bodies. Every time a cell divides it has to copy itself, copy its genetic material, and there is a chance that something goes wrong in that copying process. While I've been talking we've all made about 1/2 million new red blood cells, to give an idea of the scale. 12 million new gut cells, all happening routinely in our bodies. It is routine and controlled, a system of traffic lights around our cell division, saying go or stop, finely balanced. When that balance is interrupted and the stop signal is interfered with, for a variety of different reasons, that's what goes wrong in C. Then comes the uncontrolled growth of cells, which then compete with other cells around them, squash surrounding tissues or spread to other parts of the body. It's not all genetics that causes our cells to divide out of control, but it plays a part — the influences that can make cells divide out of control because of faults that have accumulated in that DNA. Environmental influences are important; hormones can play an important part, e.g. oestrogens and breast Cs, a clear link. Take the contraceptive pill or HRT, and that has an influence on the accumulation of faults in our DNA. There are lots of natural self-regulations. If you copy your cells by dividing, then things can go wrong just by chance. Our immune system is more important than we originally thought in the development of C, in particular with certain virus infections. From damage to the DNA, C can arise. I will focus on inheritance, picking up on the bits that are important and those that are not. In nearly all our body cells, look into the centre with a microscope: the nucleus, and inside that the chromosomes, which are bundles of genes together with bits between the genes; the chromosomes are made up of tightly wound DNA. The DNA is joined together by the DNA letters joining the 2 strings together. That is what we talk about as a sequence of DNA, 3 billion of those letters per cell, composed of 4 different letters.
Those sequences of code determine the messages sent to our body. If the messages go wrong, that's when problems can arise. The exome is 20,000 different genes, which are sections of that DNA. The genome is all our genetic material in one cell, all together, the genes and the bits between. The word genome derives from the words gene and chromosome. Just 1 letter change in all that sequence can be enough to cause really dramatic changes to our bodies, but it all depends on where that letter change occurs. All of us have several different mutations within our genetic code. If those occur in parts of the code that don't do much, then there are no consequences. Some of those changes can occur right now as I'm speaking, a mutation in one cell then copied to the daughter cell. Some of those mutations are inherited from our parents. Inherited in our cells, 2 copies, one from each parent. Often if you have a mutation in one copy, that might disadvantage you, but alone it is not enough to cause a problem, because the other copy needs to be knocked out too. The other copy can be sort of rescuing the bad copy, or the bad copy can over-ride the normal one; for different diseases there are differences there. For C it is often the case that you might inherit one copy that puts you at a disadvantage, but it's only when the other copy is knocked out, by chance or radiation exposure or something like that, that the C arises. All C is genetic, but not all C is inherited. Any C arises as the result of genetic faults in the DNA, but most of those faults are not inherited. The difference between inherited forms of C and chance or sporadic forms of C is that if you have inherited a C-predisposing gene, you start off life at a disadvantage. In order for the C to arise you need more than 1 mutation or bit of damage to the DNA; there is a required sequence of lots of different steps before the C starts. If you have inherited one of them, you start off disadvantaged. That's why in the inherited forms of C we tend to see C at a much younger age than in the sporadic forms of C: they started with a disadvantage and needed fewer steps to accumulate before the C arose. When we talk of inherited Cs, that's not new. Aldred Warthin described a family between 1895 and 1915 who had very young onset Cs. This involved bowel and womb Cs; he described it as an unusual combination of Cs, which we know today as Lynch syndrome or hereditary non-polyposis colorectal C, and we know the genes that you inherit that can cause it. So really we've known about this for over 100 years. And there are other examples of familial Cs that we've known about for a long time, from family histories showing there must be an inherited component, but only in the last few decades have we found out what that component is. For breast C, an old headline — "Her mother died of it, her aunt has it, she has it, and her 3 daughters" — accompanied by the fact that once the gene was discovered, the test for that woman spared her from the risky surgery she was going to go for because of her terrible family history: she had not inherited the gene that was in her family. The Angelina Jolie effect: she had a BRCA1 gene mutation inherited from her mother.
Her mother had ovarian C at a young age, and there was a wider family history of breast C; after a genetic test showed what the cause was in her family, Angelina went on to have a predictive test for BRCA1, which showed she had inherited the same mutation, and she went on to have a risk-reducing mastectomy and risk-reducing removal of her ovaries. The demand for BRCA1 testing, and the similar BRCA2 gene, went up dramatically after her story. We receive lots of referrals to our genetics service: please test this person for these 2 genes. A good thing, in the sense that she raised the profile for people who previously were not getting appropriate testing. But, what many people don't realise, these 2 genes only explain 5% of all breast and all ovarian Cs. The majority are explained by other causes. It's not even straightforward to do that test to find out if you are in the 5% category, because the 2 genes are both very big and the inherited bit can be different in each family. So the lab has to trawl through more than 10,000 letters of genetic code in each gene and look to see if there are any changes in that gene that have been inherited that might explain a family history. We all have those 2 genes and we all have some variation in those genes, and the lab has to try and decipher what is just normal variation and what is causing the high incidence of breast and ovarian Cs. The more we test, the more we realise that we find a variation but it does not mean much. So we have to be very careful about saying someone is BRCA1 or 2 positive, because it may be a spurious red-herring finding. I spend a lot of my time telling women intending to be tested that it is not as simple as they think. 1 in 3 of us will develop a C at some point in our lives, and across the board, for all Cs, 95% of those will not be due to a single inherited factor. So in 95% of cases there may well be an inherited component, but that component is very complex, consisting of lots of different factors interacting in ways we don't yet fully understand. Part of that interaction will also be protection. One gene protects a bit here; that one increases your chances, or protects in another environment. We just don't know enough yet to put all that together into 1 algorithm that says: with your particular genetic combination and your particular environmental exposure in your lifetime, this is your risk of C type x, y or z. But the headlines make it sound as if we are at that point. The press are more responsible these days, but they often make it sound like: we found a new gene, go to your doctor, get tested for that gene and you will know whether or not you will get C. It's not unusual for someone to come to a clinic waving a paper with a headline like that — can I have a test for these new-found genes please. From a research point of view, finding a new breast C gene is helpful, as it gives insights into the mechanisms of the disease, but it often fails to translate into a useful test, unless it is a very high risk gene. If the new-found gene increases your risk over the next 40 years by 1%, that's not a clinically useful test to have. Similarly for bowel C, for example. It's that bit that is not always conveyed by the media reports. Then there is the Kylie Minogue effect. She had breast C 10 years before the AJ effect. She also had a gene mutation test, but her test was looking at expression of a particular gene on her C, so she could receive a targeted treatment specific to that gene mutation.
Her gene mutation was not inherited; it was the result of the uncontrollable growth of her breast C. That was HER2 gene expression, which meant she could be treated with Herceptin, as that blocks the growth-factor receptor on the cells and shrinks the C cells more than normal cells. That's what we are aiming for: targeted treatments. The testing is easy, but the interpretation can still be difficult. The tech is there to sequence our code, just like that, but the problem lies in the interpretation of the results. There is a realistic promise there, but the practice tends not to deliver like the headlines would imply. James Watson, DNA discoverer: "we used to think our destiny was in the stars, now we know it's in our genes". Now we can sequence our DNA, we will know what our future holds. It is more complicated than that; we do not have a crystal ball as part of this process. We might do better to remember a quote from John F Kennedy, 30 years earlier: "the greater our knowledge increases, the more our ignorance unfolds". In the genomic age, that is very true. We know more and more, we test more and more, massively more data, but what that often does is expose what we don't know, better than before we could do that. In the last 10 years alone there has been a 10,000-fold increase in the speed, and decrease in the cost, of genomic sequencing. In 2001 it cost 3 billion dollars and took several years to sequence 1 entire genome. In 2017 you can do that for 1,000 dollars, still going down, and do several in a day. A phenomenal scale of change. People assume if you can do it faster, you get answers quicker. But you gather a whole load of data and lack the interpretation. To interpret this, you need to do lots and lots of clinical investigations, including of other family members etc, and the overall costs can really rack up. An analogy is comparing fishing and trawling. We are no longer fishing for genes that we suspect are causing something, from a family history or an appearance. Say we have someone who has something like the appearance of Down's Syndrome: we know what bit of the genetic code to home in on. If you start off not knowing where the gene may be, trawling the entire genetic code can be a more cost-effective process than fishing for your single fish. But you get all sorts of fish that you don't know how to cook, maybe poisonous, old boots, unexploded bombs, all sorts of stuff, analogous to trawling. In the USA they are a bit more free and easy with their testing compared to the NHS here. People pay extra money for a broader gene test, but they don't have any answers. They find risks at most, when they were expecting answers. Many headlines in the US express surprise from the people who pile into expensive testing and get no answers. The iceberg is also quite a good representation. The bit that sticks up over the water is the people with a strong family history of C, or a specific set of signs or symptoms. They are more likely to have the strong genes that give strong predictions. That family in 1895 were sticking out of the water. The vast majority is below the surface, much less tangible, you don't know where it is: the weak genes and environmental factors that interact in a very complex way, which give poor predictions in the clinic. We are tackling some of this through the 100,000 Genomes Project, GP. It's looking at the lower part of the iceberg, or looking inside the trawl net.
We are focussing in on a certain group of NHS patients who are coming through the doors anyway, who aren't getting answers from current NHS genetic tests. For those people we will look through their entire genome, 3 billion letters of it, and see if we can find anything there that explains their particular condition. Divided into 2 groups: rare diseases, and the other is Cs. The 2 are very different. For the C patients, we sequence the genome they've inherited, present in every cell of their body, and compare that to the genome of their particular C. The comparison will hopefully give us clues where to target, as well as how it may have arisen. In the rare diseases: there are a lot of individually rare diseases, but put them all together and they are relatively common — 1 in 17 people have a rare disease. If we've exhausted the normal testing, then comparing the (often) child's DNA with the parents' genomes might give us important clues. The whole project was announced in 2012 and took a while to get going. A lot of investment; the plan was 100,000 genomes in 70,000 patients — in C studies, 2 genomes from 1 patient. 13 different genome centres around the UK and several industry partners, deliberately brought on board to try and encourage the development of a genomics industry. The Chief Medical Officer established 3 advisory groups to the GP — an ethics group, a science group and a data group — and importantly, they interact. I'm on the ethics group, so I get some interesting insights into the ethical discussions about this venture and testing. 4-fold aims. To create an ethical and transparent programme based on consent: this was an offer to patients; they could only take part if they were fully informed about the implications. To bring benefits to patients and bring a genomic service to the NHS — and be a first in the world to do so. There are a lot of genomic ventures around the world as part of research, but within the NHS we'll be developing this as a diagnostic tool. The hope was to stimulate scientific discovery and medical insights by doing that, and to stimulate UK industry and investment. Scotland is now on board as well, and Wales. Amy has a rare disease — she will give a blood sample, which is representative of her inherited DNA; it could also be a cheek swab. Then, if possible, the genome from both her parents to compare it with, to rule out normal variation. If we found something in Amy that looked suspicious, like a missing bit or an extra bit, and we then check both parents and find one of them also has it, then the significance goes down. Whereas if it's new in Amy, that is much more important. The more we analyse our genomes, the more we realise that variation is much more widespread than we initially thought. There is a study in the USA looking at healthy octogenarians, analysing their code, and they're finding all sorts of mutations, bits that would predict nasty diseases, and yet they are healthy. Our ability to predict from changes in the code is not nearly as good as we originally thought it was. For the C patients: DNA from the normal cells — unless it's a blood C — then compare to their tumour. There are 2 routes into the GP for C. The familial Cs go into the rare disease branch, like the 1895 family. If you have a C then you go into the C arm, with a different type of investigation. With more knowledge about the Cs, the blunderbuss treatments of the past can be refined a bit and made more targeted. You kill off the C cells but kill off a lot of other cells as well — why your hair falls out, you feel miserable.
If we can target the C cells only, then that is far preferable. The GP will collect medical details of the individuals along with the genetic data. That means we cannot anonymise this genetic information; it is identifiable. So the data control is really important. The sum total for the UK is now into the 20,000s; it's going well. Locally, something like 2,000 roughly. We've relatively few results at the moment. This is to be expected. There are 3 different types of results that may come out of this. The main findings are why you've gone into the project in the first place; then a bunch of additional findings that are nothing to do with going into the project in the first place, a sort of "let's offer you an MOT while looking at your genetic code", to see if there is anything else wrong. That was controversial — whether it should be disclosed automatically, or whether people should be given the choice, or whether there is a choice about unknown unknowns. Then some additional findings along the line of: if "Amy"'s parents were intending to have more children, both would be checked to see if they were carriers of a particular condition, e.g. cystic fibrosis, to see if the risks to future children were increased. The controversial bit about that was that the results would only be given if both members of a couple are carriers. If just 1 is a carrier, then the future risk is not increased, and that result would not be disclosed. This project is not a pure research or pure clinical venture, but a mixture. The rules and regulations of the two are very different, causing no end of confusion in hybridising them. The aim to get direct clinical benefits to patients is clearly a clinical aim, fundamental to the NHS. But the aim to make new discoveries and understandings about diseases is purely a research aim, not what the NHS is set up to do. To develop a genomic medicine service for the NHS is a clinical capacity-building aim, and to support companies and researchers to develop new medicines, therapies and diagnostics is very much an industry & research aim. So there are a lot of questions about how someone can consent to all of these, in 1 go, in a meaningful way, when you've simply come in for a diagnosis. Is it really ethical to offer someone a complete genome test that might help diagnosis when they can only take part if they agree to all of these? An all-or-nothing project: sign up for all of it or none of it. So a novel hybrid of research, clinical, service development and industry capacity building. Exciting, but I think it also has its problems. We are trying to target drugs to deal with particular Cs. Say patient "A" has an ovarian C with a particular DNA variation, and drug A is developed to deal with that situation, not for anything else. Not a blunderbuss treatment: focused on that mutation. Patient "B" has a different mutation that leads to the development of drug B. Patient "C" might have a totally different type of C, or a different location, but it may be the result of the same mutation. So looking at the mutation rather than the clinical picture can be helpful in knowing what drugs to target with. Or the same brain tumour in 3 children might have a different mutation profile in each child, while 3 different tumours, in different places, in 3 different children might have the same mutation profile.
We are looking at particular markers that may say something about that particular C, markers for particular drug resistance and markers of particular side effects; patients can then be stratified into different types, and each gets the corresponding tablets for their C. The use of genetic data and medical records is a topic of great debate at the moment. The scandal around the care.data issue, where the government had to backtrack pretty swiftly about sharing medical info, is relevant to this new venture, which wants to gather data from the population and link that to medical records. A rock-and-a-hard-place situation, because without that massive sharing process we will never know the answers. But with that massive sharing there are risks of privacy breaches, and how do we allow people a meaningful choice but at the same time get everyone buying in? People started opting out in the care.data situation, and then the data resource is not going to be there to be useful to future generations. Big data is crucial to understanding the bit of the iceberg below the water. So it's great with a very strong genetic character that causes a very clear clinical picture, or a strong family history, but for the more subtle interweaving of different factors we've got to collect data on a large scale. It may be that national — just the NHS — is not enough to get statistically significant data, and we have to go international, and then crossing those boundaries exposes a load more problems. So how can data sharing be developed while retaining the trust and confidence of the public and participants? That is a moral, regulatory and technological challenge, with no easy answer. In my group in Soton, we're looking at the people recruited to the GP and asking them some of those questions, through questionnaires and more detailed interviews, to see what people think. One early finding is that what the health professionals and the researchers expect patients to say isn't necessarily what they say. Picture of a man walking his dog alongside some water, and the dog is in the water. So: should I tell him? A nice analogy for the genetic code situation. He might know — absolutely comfortable with the fact his dog is having a swim, and knows the dog is there. Or the dog might be struggling for its life. The issue of analysing someone's genetic code, finding something out about them, maybe about relationships to other people, raises the same sorts of questions. When you are a holder of such info, do you tell people? Or is it something they don't need to know, something they don't want to know, or do they want to know everything? All sorts of ethical and privacy questions arise: moral issues, insurance issues and potential minefields. I run a group called the Clinical Ethics and Law Unit at Soton. We do research focussed on the ethical issues raised by genetic and genomic testing, and all sorts of interesting issues about how info is shared within families. Q&A Perhaps you might like to say something about epigenetics, and the way scientists have been humbled after they said a lot of junk DNA does nothing, and now they find it does do something? And the ethics of telling people: I had a 23andMe test and they have a part where you can look at whether something is serious or not. I wanted to look at it, as you can always adjust your lifestyle with the foreknowledge? It can be better to know and it can be worse to know. If there is something you can do about it, the argument is much stronger.
A treatment, an intervention, a lifestyle adjustment that may change that. There are bits of your genetic code that might tell you that you are at risk of something you can do absolutely nothing about. It may never eventuate anyway. 23andMe does Alzheimer's gene testing, and at the moment there is no treatment for that. It might give you the opportunity to say yes or no about finding out. But when a number of members of a family do that test, then you have to think about other people finding out. Were you only testing people who came to the hospital, or from the general public? As I put my name down for it and never heard anything about the GP. It's not the general public; it's people with particular diseases. Does it give a bias, that way? The aim is not to look at the whole population — let's look at the low-hanging fruit, if you like. If we look at the whole population we will find a lot of genetic variation, interesting, but here we're trying to find new diagnoses. Epigenetics and junk DNA? Epigenetics is things that affect the expression of your genes without changing your code. So something binding to your code alters the regulation of a gene that is farther down. It might be something sticking to your code that silences a gene or makes it over-active. Epigenetics is often propagated across the generations, such that if you inherit a particular sequence from your mum, it behaves differently than if you inherited it from your father. The exact sequence might be the same, but because of different things binding to it — which we cannot as yet test in a whole genome test — it will behave differently. There is a rich and emerging study of that. Originally the GP was to collect what were called other-omic samples, but in practice it has been too difficult to do; it's still an aim, but not happening routinely at the moment. Junk DNA was a term used 20 or 30 years ago: genes send the messages, when genes go wrong the message goes wrong — nice and clear cut. The bit in the middle doesn't do anything — actually, we now know that the bits in the middle are often important, again in regulating things, if something is bound to them. You might get a promoter of a gene, or a silencer of a gene, thousands of letters away from the gene itself. Only now are we finding out what it does and how. There must be bits of DNA in me that are silent, never do anything, but in someone else will do something. Junk DNA does exist, just much less clearly delineated than we originally thought. That's where the JFK quote comes in nicely. The very basics: I'm assuming that C starts from 1 errant cell, but can I also assume that happens quite often but never develops to 2 cells or 4 cells, so epigenetics can come into play in that early stage? By definition it's not a C then; it is not growing uncontrollably. The pre-Cs may go away by themselves. For example, a very common ductal carcinoma in-situ in a woman's breast will, we think, often regress by itself. But now we are better at screening for things, and it's a rare surgeon who would leave that untreated, because it might go on to be a full-blown C and spread to other parts of the body. You've got protective factors, control mechanisms, that may allow things to go wrong for a little bit and then kick in, and retain control again. The immune system is very important there. The more we learn about it, the more we realise some of those stops, checks and signals are your own body recognising that the cells have changed so much that it looks like it's infected, and so needs attacking.
A good control mechanism that needs getting on top of. So young kids or teenagers, they all could potentially have a C any day, a number of times a year, but it never develops? Yes. If you've inherited a mutation, that just starts you off at a disadvantage. Even with a really strong BRCA1 mutation you may have enough protective factors around to never develop that C. Typically how many point mutations does it take — presumably on some occasions just 1 critical mutation might? In the classical types of C, 1 mutation, such as described by Knudson: retinoblastoma, a childhood tumour where you're born with 1 mutation and are just waiting for the other one to hit, in the same gene, so both of your copies are knocked out, and you develop a tumour at the back of the eye. So you inherited 1, which alone isn't enough, and the second 1 is a chance one. In, say, the case of bowel C, people don't know what the typical number is, but 4 or 5 is usual. It depends where they happen: just drinking our pint of beer we might be knocking off a few, starting a few mutations off, but if they happen in the critical bits of the DNA, then you need far fewer hits than in non-critical bits. So you're saying a single point mutation can't give you a C, unless you already have one? I just don't know; I don't think that study has been done. I think it's pretty unlikely that 1 point mutation would be sufficient, because you still have another copy of that particular gene that would have to be knocked out. That depends on the body's physiology, being able to say we won't use that one, we'll start using that one? That's part of the deal of your body's physiology, it does that, yes. Again it does depend on the gene, but for most of them that's the point of having the other one: it can compensate. Sometimes, you're right, one copy is so bad it over-rides the good copy, but that is not the usual mechanism for C. When I looked at mine, some things can balance out. I had haemochromatosis, where the body takes in too much iron, and the other was a type of mild leukaemia, another one was thrombosis — a lot to do with the blood? 23andMe originally started as looking at your ancestry, how much of a Neanderthal you were, block background. Then it started looking at common variations, and those genes are subtle risk factors; they are not high-risk predisposing genes. Nothing in 23andMe, apart from looking at some Jewish mutations for breast C — all the rest are subtle risk factors that don't do very much. The problem is, if you have an over-the-counter test to tell you a lot of medical info, it can only go so far. That's where it ran into problems in the USA: the FDA first said we don't want you doing any health-related testing, because we think that should be handled by the healthcare system. Just recently it's been approved again, but I would urge caution. Because their selling point is "knowledge is power" — an easy slogan to buy into, but power to do what? It's all very well, doesn't cost much, probably not going to harm you, but will it benefit you much? They came up with Alzheimer's, Parkinson's, BRCA1 & 2, so I'm watching my diet, and it does make me research internet things that are happening in those areas, like anti-malarial drugs against Parkinson's. Of the people consulted for the project, what proportion decided not to sign up to it? Very few people said: I'm not signing up for that. The question is, are we in some way coercing people to take part?
They are people who have come through the health service, not the general public interested in finding out. They are ill, or an ill person is in their family, and they want a diagnosis and this is a way to a diagnosis. But at the same time they have to sign up to all the other bits. But have we twisted their arms into taking part, when under totally neutral circumstances they would not have joined? A few have said the whole data-stuff is too much for me. There are quite a lot of people who don't turn up for their appointments, so they might be voting with their feet, but of those, most usually do rearrange for another appointment. Of the people suspected of C, across the board, pretty much everyone says yes. A lot say they go ahead because it will help advance knowledge. You're not going to say to a C person, this test is going to revolutionise your particular treatment or diagnosis; it's more for the future. The rare disease arm, including familial Cs, is much more sold to people as a potential diagnosis that they won't get through the health service. If someone has developed C brought about by smoking, would he be invited to the study? Probably not. There are very specific recruitment criteria, which have broadened a bit after realising how difficult it was to get people to take part. But we as health professionals are finding the right people, in the right circumstances. We are looking for people to offer it to, rather than forgetting to recruit those we should. It's probably true that the people focused on this project recruit people to it, whereas a jobbing GP or non-genetics medic might not think about that. E.g. psychiatrists: certain psychiatric conditions are eligible to be recruited, but I'm not sure there is much flow from psychiatry into this project at the moment. Do the medico/genetic R&D companies get access to biopsy samples, to try their medicines on, or do they get potentially nice compliant guinea-pigs to try their medicines on? At the moment the Genomes Project is organised like a reference library. You can go in, read the book, but you can't take the book out. That is to reassure people: the companies don't have access to the patient to inject them with all sorts of drugs, just to look at their genetic code and perhaps just the results from biopsy samples, so they never get biological samples. The gut genome seems to be becoming influential/fashionable? We don't have the evidence yet to see how influential, certainly lots of headlines. It is promising but not influential right yet, as we don't yet know what it could influence. That is looking at your microbiome, your gut flora. Again it seems a bit like junk DNA, as we used to think. What you shit out is out of your body and now irrelevant, but it turns out it's important what the balance of the bacteria in there is. There is nothing in your body that is straightforward and working in isolation. It's a subtle set of checks and balances, and very rare that you can say, this factor will cause that, definitely. That factor in conjunction with other unknown factors might increase your chances. Perhaps a quantum computer will be able to sort it all out? That's what people think, that the more data they get, the more likely we will get an answer. But I suspect a lot of these things will not be amenable to computer power: so many variable factors like the environment in your mother's womb, where you lived, your particular mix of racial ancestry, diet as a child, the food you had yesterday.
Genetics and the environment, you touched on, but is there much research there? How do you document people's exact environment, unless the influences are strong ones? Look at smoking and how long it took us to make the connection between smoking and lung C. That is a strong risk factor; imagine a risk factor much more subtle, and trying to identify when it's a risk factor and when it's not, but is a protective factor. We're now in a big-data world, with large cohort studies over the years; I wondered if that could be tied into the genetics? I don't know about better, but in combination, cohort studies are very important. There are lots of moves to do genomic analysis of cohort studies, definitely. A lot has been written about the difficulties of keeping even a proper food-intake diary, and making it reliable. Cohort data is probably the best stab at the moment, but it's still easier to find the big strong factors than the subtle ones. We tend to forget that genetic factors can do 2 things: they can, say, increase the risks of C, but on the other hand they can decrease your chances of something else entirely. How do we balance all those things out? We often see it in families where they've inherited something that sounds really awful; why has evolution not got rid of this? Probably because it's also protecting from something else. Could you explain what cohort studies are? A posh word for following up people or families over a long period of time, rather than saying we'll take 100,000 people and analyse their genome. Like the Southampton Women's Study: following women as they have children, then 5 years later, 10 years later. The POSH study is a breast C study, the age of diagnosis and their genetic code; that is not a cohort study. It can be a very specific cohort, just people in an osteoporosis study: we're watching that specific population to see what happens in their futures, then a statistical analysis once you've collected the cohort. It could be people born on a particular day in 1958, then followed onwards. Some of those are still going strong. I think setting them up now is much more difficult; people worry more about privacy, data protection etc. The dietary studies are funny because they ask things like, what were you eating 5 years ago? You just don't know. And you don't even know what they were putting in food 5 years ago, put in under a different name even? Also what we cooked our food in, say aluminium pans; the chances are we absorbed Al, which may be very bad for our health. But it's more likely you will take in Al if you cook acid food. Factors like that will interact with your genes. So we could think of a particular toxin, get 10 people to eat that toxin, and some people it won't affect at all, because something in their genetic code protects them from it, or it does not do the same genetic damage. Like sickle-cell anaemia and malaria: the carrier state protects against malaria. There was a major public health incident in Camelford, Cornwall, where a lot of people took in a seriously abnormal amount of aluminium sulphate in the drinking water. Autopsies later on illustrated that. Would a follow-up study of that population have been a cohort study? Cohort is just a longitudinal look, rather than a cross-sectional look. If you followed them over time, that would be a cohort study. It is a fairly loosely used term. You can go in and study 100 people; that's just doing a test. If you follow 100 people, however you might have selected them, then that constitutes a cohort.
Then there are people who come in from another area and mix up the gene pool again. If that is not taken into consideration, how do your results fare? For example the Asian population coming to the UK: their incidence of certain diseases changes quite dramatically. So we thought that must be due to environment, it can't be genetics. But it's still a combination of the 2. Nothing is ever just genetic or just environmental; maybe the majority is to one side or the other, but always a mix. But you can get a prevalence of a disease gene mutation in Ireland, Sweden and Japan, yet they are all geographically separate? If you look at smoking, we will all have heard my grandad smoked 60 a day for 60 years and he only got C when he stopped smoking, or he never got C. There will always be people who can do some bad thing like smoking and get away with it, probably because there is something in their genetic code that protects them from the damaging effects that get to other people. There are about 300 different things in cigarettes? You talked of the study of healthy octogenarians who had gene faults that never materialised into anything. Are you concerned that with the advent of genomic medicine becoming cheaper and quicker, there is a risk or danger of pre-emptive or preventative surgeries or treatments happening to people who would never need such interventions? Yes. That's where the fishing v trawling comes in. Start off with a very strong family history, then you find a mutation, then it's a pretty good bit of advice: think about risk-reducing surgery, for example. But if you start by analysing the genome and finding an alteration, then the data coming in now shows that those people are in a different boat, but they feel themselves to be in the same boat. A woman with a BRCA1 mutation found by sequencing but without a family history might think she has the Angelina Jolie gene, but the evidence seems to suggest that that woman's chance of developing breast C is much, much lower than someone who comes with a strong family history, because she has other factors that protect against it. So screening the whole population and then having your breasts or ovaries removed is going to be the wrong advice. It's such an important point and we've not got there yet. The business of additional findings from the Genomes Project: people coming in with a child with learning difficulties, offered a BRCA test as a by-the-way freebie. Those women may not go on to develop breast C, but if one gets that result and then feels as being in the Angelina Jolie boat, she is likely to seek that sort of intervention. I'm worried about that. Do you think there are enough safeguards in place? No I don't, not at all. Hopefully we will get there as more studies come out. It's now been calculated that we each have 5 serious mutations in our individual genetic codes that won't cause most of us any problems at all. As that becomes more widely known, I think we will be more cautious. People tend to think of the genetic code as a blueprint: that once we have the readout, we'll know what to do with it. But you need the readout, with the family history and signs and symptoms, to interpret it. Those 2 really need to go together; that is the thing that is not widely understood. That message is one of the key messages we need to try and get out there more. As for your ethics panel: thinking of the Angelina Jolie case of breasts and ovaries removed, and now her marriage has broken down.
Not necessarily related, but there could be psychological factors coming in? We shouldn't go there. For anyone, at an everyday level, it's a big responsibility, as their lives are being altered? And also on the level of people tending to think that having a breast removed is just a boob job, ten a penny. But having your breasts removed as risk-reducing surgery has about a 30% complication rate, and people don't quite hear that bit, they just want rid of them. You don't know what other hormone effects there may be for the rest of the body. You can't have a total clear-out; there will always be a bit of your body that is at risk and you can't remove it all. Is there any effort to educate the public, as a lot of this is about expectations, and not really from knowledge? Things like this talk are something we should be doing more of. We must not sit in our labs or ivory towers just saying this; we need to go out and engage the public. I think the group behind the Genomes Project, Genomics England, have tried hard to engage more. There is criticism of them: a great big juggernaut, moving in a clumsy way. It's a bit conflicted, as on one side it wants to recruit lots of people, sell its wares, and at the same time urge caution; an uncomfortable mix. The story we get about genetics from the headlines is not realistic; we need to find other ways to get that story more realistic. The hype about genetics has not died down. Usually with a new medical development there is a lot of hype and then it dies down. The reason behind the genetics hype is that the technology has kept on getting faster and we keep thinking we will get the answer with the next bit of kit. There is something about genetics that is different, I suppose. 23&me ask a lot of preliminary questions and then feed back to you those questions and replies as answers. There have been a number of studies into those direct-to-consumer companies. If you send the same DNA sample to different companies, you get different results back. Worse than that, if you send the same samples to different companies along with a description to one as being a young fit woman, and to the other as an overweight elderly woman, you get very different results back, as they use that to make their predictions.

Monday 08 May, Dr Thomas Kluyver, Soton Uni: The Southampton Sailing Robot Project. 26 people, 1.5hr. I was asked to get involved with a robotic sailing project as I'd done a bit of sailing and I liked fiddling with computers. 9 months after that I was on the way to Stansted airport, to fly to Portugal for the World Robotic Sailing Championship 2016, a whole lot of fun. With me tonight are Tony and Sim, who were also part of the team; there were a number of other people, 9 in total, and 7 of us went to Portugal. Of the 9 in our team, all were of different nationalities. The first thing you need is a boat. We initially thought we'd build a boat, but that is difficult and time-consuming. There is a community who do remote control sailing, including a class called the 1m class, 1m long. Plenty of these already made, and we bought a second-hand one for something like 200 GBP. There are 3 sets of sails for different wind conditions: the smallest for the strongest wind and the largest for weak winds. The nice thing about an r/c sailing boat is that it already has the servo motors to move sails and rudder, a chunk of the work already done for us. So the bits are a radio receiver with aerial, about an inch long, and 2 servos: the one with the large round bit is the sail servo, pulling the sail in to the boat centre, then a more standard servo that turns the rudder. Then you need a computer to control the robot. So a Raspberry Pi, a tiny computer 2x3 inches with processor and memory, plus a removable memory card for the programs. You can connect it up to a network; no screen or keyboard, but once it's connected to a network we can talk to it from standard computers, move the programs onto it, get data off it, tell it what to do next. A lot of exposed electronics, which don't mix with water, especially the salt water where the competition is held. So Tupperware boxes to keep most of the water out of it. Wires go through holes made in the box to the servos and sensors, holes sealed with gummy stuff. We roughly cut a large hole in the hull side, so we could slot the computer inside. When sailing, the joins to that panel are covered in tape as waterproofing, and more tape over other places where water could get in. The brain of the operation got called Brian. The boat is called the Black Python, like the Pirates of the Caribbean's Black Pearl, but the computer language we use is Python. For the sensors we made one from an off-the-shelf wind vane: we glued 2 ring-shaped magnets to it, coupled to a board that senses magnetic fields, so we can detect what orientation the magnets are in, and so which direction the wind is blowing across the boat. It is mounted on the top of the mast, about 2m above the deck, to avoid the sensed wind being distorted by the sails. Under the hull is a weighty blade-like keel to counterbalance the lean across the hull from wind pressure on the sails. We need a GPS. All the competition challenges require negotiating around marker buoys for which we are given GPS co-ordinates, lat and long; the boat needs to know where it is, to go to where it needs to go. This GPS is also on the mast, about 2 inches long, this one otherwise used for high-altitude ballooning; apparently that type works well for this sort of application. Cost is only something like 7 or 8 GBP. A compass to show which way the boat is pointing, and an accelerometer to tell if the boat is leaning, which you need to adjust the compass; a board about an inch square, MEMS, micro-electro-mechanical sensors.
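On the wind vane: the board under the ring magnets reads the magnetic field in two axes, and the vane's angle drops out of the two components with an arctangent. A minimal sketch of that conversion in Python, with the field values assumed to come from some sensor driver; this illustrates the idea rather than reproducing the team's actual code:

    import math

    def wind_angle_deg(field_x, field_y):
        # The two in-plane field components of the ring magnets rotate with
        # the vane, so atan2 recovers the vane angle, 0-360 degrees,
        # i.e. the apparent wind direction across the boat.
        return math.degrees(math.atan2(field_y, field_x)) % 360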
MEMS are a way to get physical data into a form that can be electronically processed. We have to calibrate the compass for each use: 2 people holding the boat and turning it in a circle, the calibration dance. Now, how do we put the bits together and make it sail? Why is this an interesting challenge, why difficult for a robot? There are other challenges where the boat has motors; there is a Soton team called hydro-team, boats with motors. There the control to go from A to B is pretty straightforward: point it in the right direction and tell it to go. For sailing, it is dependent on the wind direction. It can't go straight into the wind; it can go 90 degrees to the wind or have the wind behind, and modern boats can go 45 degrees to the wind. A better boat will let you go closer to the wind. You have to zig-zag to go into the wind: tacking. Sail 45 degrees to the wind one way, turn about 90 degrees across the wind, and sail on the opposite tack. Eventually you get where you need to go. This is where control of the sail position comes in. Running in front of the wind, you can let the sail out as far as it will go, near enough; it acts as a big bag, like Viking longships with a big square sail. These boats go fastest at about 90 degrees to the wind by putting the sails at about 45 degrees; then the sail is acting like an aerofoil, like the wing of a plane. The wind is perturbed around the curves and pushes the boat forwards efficiently. If you want to sail close to the wind, you pull the sail closer in, and it will keep you going forwards. This is partly the function of the keel: if going across the wind, you don't want to drift sideways in the direction of the wind, and the keel blade helps to keep you straight. We did most of our tests at Eastleigh Lakes near the airport. We also borrowed another boat, to test out our control systems before our competition boat was ready. That boat was simpler, with 1 sail; our main boat has 2, mainsail and jib. On some boats the sails are controlled separately, but on our boat the 2 sails are controlled from 1 servo; they move together. There is some ability to adjust them separately when we set up the boat, as we can adjust where the sheets are connected, but once it's on the water they go together. We ended up with 2 borrowed r/c Lasers, so we could test the r/c on one and the control systems on the other. A big pole has a wifi antenna on the top, which gets us better range and lets us stay in contact with the boat during testing, while keeping rain and sun off the control laptop. The antenna is very directional, requiring it to be pointed at the boat, which can get a bit tedious. During the competition this kind of contact was intermittent; we were trying to keep a wifi connection from the bank but the challenge area was several 100m away from us, a dodgy connection. But the boat does not actually need the wifi connection; it was just for us to know what was going on onboard. Once we set it going it's totally autonomous. We have the original r/c system still in place as an override. Also, in the competition, you are allowed someone in a chase boat who can intervene if things go wrong, like crashing into something; other than that you have to let it do its own thing. We used open source stuff for the Pi, the Python language, with our source code on GitHub, a web repository that contains all our code, so you can see what we're doing wrong. The key component to make it work is ROS, the Robot Operating System; the version we used was Indigo Turtle with ? shell.
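The calibration dance exists because metal and electronics near the compass add a constant offset to its readings, the so-called hard-iron error. A common minimal correction, sketched below, is to log readings while the boat is turned through a full circle and subtract the midpoint of the extremes on each axis; this is an illustration of the general technique, not the team's actual routine:

    def hard_iron_offsets(samples):
        # samples: (x, y, z) magnetometer readings logged during the
        # calibration dance, i.e. one slow full rotation of the boat
        xs, ys, zs = zip(*samples)
        # the centre of the traced circle is the constant offset per axis
        return ((max(xs) + min(xs)) / 2.0,
                (max(ys) + min(ys)) / 2.0,
                (max(zs) + min(zs)) / 2.0)

    def corrected(reading, offsets):
        # subtract the stored offsets from every subsequent reading
        return tuple(r - o for r, o in zip(reading, offsets))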
The ROS principle is that there are nodes, which are separate programs running things, and they talk to each other. One program just controls the servo motor, one just gets data from the wind sensor; they send out ROS messages which the other programs can listen to. It makes it easier to separate out the bits needed for the robot, so if the compass bit crashes then ROS knows how to restart it, and it does not mean everything has crashed, just that one part has. A lot of people program robots and end up re-writing everything from scratch, so ROS means you have pre-written bits which can be shared between multiple robots. So if we've written something that works particularly well, say determining how to tack up-wind, then someone else could use that and plug it into their own sensors and things, which might use data in a different format; a standardised interface lets you take and combine different bits of code. Q: With the sail servo, do you have a position sensor for sensing the sail position? At present we only know how much sheet there is in or out. So if the servo drives to one extreme, you don't know about that until other things start happening? Yes. With the wind sensor we can see which way the wind is coming from, but in the edge cases, where you gybe, you don't know exactly how the sail is set. ROS lets you define the launch file, which tells all the nodes that we want to start. There are different launch files for testing, for calibration of sensors, and then for actual sailing in earnest. There are parameter files, containing settings for different sets of sails and for different courses. The co-ordinates we want to go to are programmed in via the parameter files. ROS makes the monitoring and the analysis easy. It is all very well putting the boat in the water and trying to make it work, but often when you are sailing there is no time to work out what is going wrong. Without that, when you get the boat back, you would only have the memory of the boat turning round in weird circles and would then have to try and piece together what the system was doing that made that happen. ROS has a bunch of useful stuff to give you more info about what the boat was doing, centred around the tool called rosbag, a way of recording all the messages the different parts of the boat are saying to each other - wind is 20 deg, compass is 170 deg at a particular moment - all recorded on the Pi. Then, when we get the boat back, we can pull that data off and plug it into various things to analyse it, tools like ARCU2? that can show us plots of angles over time, a map of where the boat is relative to markers for the course. We also wrote our own stuff to help with this, an HTML dashboard, a live view of what was going on in the boat, on computer or a phone. This was helpful a couple of times in the challenges: people in the chase boat could pull out their smartphone, connect to the boat's wifi, and view some of the key parameters from the boat. We also wrote a set of ROS nodes for simulating what the boat was doing, so we could test the boat code without having to place it on water every time. The nodes for the boat itself take the inputs of where the boat is and what the wind is doing, and put out what the boat should do now. The simulation nodes complete the circle: they take the input of what the boat wants to do now, and then update the position and heading of the boat.
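To make the node idea concrete, here is a minimal sketch of one such node in Python with rospy: a sensor node that publishes the apparent wind angle for other nodes to subscribe to. The topic name and the read_wind_angle() stand-in are illustrative, not necessarily what the team's repository uses:

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import Float32

    def read_wind_angle():
        # stand-in for the real magnetometer-based wind vane driver
        return 20.0

    if __name__ == '__main__':
        rospy.init_node('wind_sensor')
        pub = rospy.Publisher('wind_direction_apparent', Float32, queue_size=10)
        rate = rospy.Rate(10)  # publish at 10 Hz
        while not rospy.is_shutdown():
            pub.publish(Float32(read_wind_angle()))
            rate.sleep()

And a toy version of the simulation nodes' update step, closing the loop from commands back to position and heading. The kinematics here are deliberately crude (fixed speed, turn rate proportional to rudder, no current or momentum), just to show the shape of the thing:

    import math, random

    def sim_step(x, y, heading_deg, rudder_deg, dt=0.1, speed=1.0):
        # turn in proportion to the rudder command, then move the boat
        # forward along the new heading for one time step
        heading_deg = (heading_deg + 0.5 * rudder_deg * dt) % 360
        x += speed * dt * math.sin(math.radians(heading_deg))
        y += speed * dt * math.cos(math.radians(heading_deg))
        return x, y, heading_deg

    def gusty_wind(mean_dir_deg):
        # a non-constant simulated wind, jittered around a mean direction
        return (mean_dir_deg + random.gauss(0, 10)) % 360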
In the simulation we make the wind non-constant, as happens a lot in reality, which makes sailing so much more confusing than the simple diagrams of wind coming constantly from one direction. A map-view video shows the waypoints the boat was going through, built from the recorded rosbag messages along with other data, to work out what the boat was doing at each point and why it was not doing what we expected it to do. Sept 2016 was in Viana do Castelo in north Portugal, on the River Lima, with a bridge designed by the designer of the Eiffel Tower. We launched from the bankside. There were 12 teams in 2 classes; ours was in the micro-sailboat class, which is up to 1.5m long, with 7 teams in our class, and 5 teams with bigger boats up to 5m, tending to be from Spain and Portugal. The competition has been going for a number of years, moving each year. In 2015 it was in the Åland islands between Finland and Sweden, and 2017 will be in Norway, I think. They asked us if we'd like to host it in 2017, but we felt we could not arrange that in time. There are no physical buoys to collide with and we have no collision detection on board. On the first day they all sail together. You are allowed momentarily to take control via r/c to avoid a crash. 5 days of sailing; the first day was for testing, getting used to local conditions. We discovered that waterproofing is difficult: electronics and saltwater don't mix well. The part that switches between automatic control and the r/c - luckily we brought 2 of those, as the first got destroyed by saltwater. We acquired some sanitary towels in Portugal to soak up excess water in the hull, which worked well, incidentally. The box duct-taped to the outside of the hull contains the competition's own GPS tracker, for a separate log of the boat track, so they can score the competition. The first race was to go round 4 markers; the quickest to go round all 4 would get the highest score. Not for us, though. The second day was station keeping: just 1 marker, and stay as close to the marker as possible for 5 minutes. This sounds easy on land, but there is no stopping for a sailing boat, always blown by the wind and pushed by the current, and there was a very strong current in this river, we found on the test day. You have to keep moving to stay in one place, like Alice in Wonderland. Q: Any detection for current? No; the boat judges where it should be going and steers by that. Q: You don't get it via the GPS system? You can try to work out 'I'm not going where I think I'm going'. We did not do that, as there were a lot of other things to do, but you could in theory pick up some measure of the tide via the GPS. The third day was a grid search: an L-shaped grid of 27 boxes, and we had to get into as many of the boxes as possible. The fourth day was a collision avoidance day, going back and forth along a narrow course, and at some point they towed a line of big orange buoys across the middle of the course. The boat had to detect these, swerve round them, then return to course and continue. By the 5th race everyone had discovered the current is really strong. At the start the wind was going one way, with the current as well, so very challenging. Of the 12 boats in the competition, 2 boats managed to start the race and one boat managed to finish. The other boat that started made it past 2 markers but failed to turn at the third. Our boat was not at all successful, partly due to the current.
We have 2 servo motors, one controlling the sail, one the rudder, and on this day we managed to plug the rudder servo into the sail servo system and vice-versa. We had a boat that went round and round in beautiful circles, with a lot of exasperated humans on the bank. That was something you have no hope of diagnosing from the computer logs, because the boat is doing everything as it should; it's entirely a hardware problem. As a result of that, we added some code so that when we start the boat, it wiggles the rudder in a distinctive pattern, then puts the sail out for a few seconds and then all the way in for a few seconds. We never made that mistake again, but we did make other mistakes. A whiteboard in the clubhouse had co-ordinates written on it, lat and longitude. Most of the other boats did much like we did, lucky for us in the overall scoring. There is an amazing GPS log of the French boat that managed to finish: it did very tight zigzags for a few hundred metres up one side, about half an hour, then zoomed around the rest of the course. Day 2, staying on station: that part of Portugal gets sudden fogs that turn up from the river - a new meaning for getting data out of the cloud. We were sitting on the bank, numbers coming in, but we could not see the boat at all. For the first minute the boat must stay in one small circle around the point, and then for 5 minutes stay within a circle that contains 95% of the track. Fiddly to work out, but done by computer. We managed a 25.3m radius, good enough for second place in that challenge. The Welsh team won that challenge, staying within a 5m radius. Day 3, getting into the most squares: we managed a fair few of the squares, again good enough for second place; the Spanish team did the best there. We discovered it's not a good idea to use the launch file from yesterday, as the boat started off in a loop to where it had been doing the position keeping the day before. The large squares were 60x60m, divided into 10m squares. Luckily they allowed us a second go after our boat sailed off in the wrong direction, otherwise it would have been nul points for that. Day 4, obstacle avoidance. We got a USB webcam, same as a simple Skyping one, fitted it to the bow, cable running back to the boat computer. We could look at the buoys beforehand, so we knew what they would look like. We wrote a simple bit of computer vision code that basically just counted how many pixels were orange. So we had to define the range of colours for that orange, then decide the minimum number of orange pixels before it decided to avoid. The camera we had brought was not suited to the bright outdoor Portuguese sun, giving very washed-out pictures; the solution was a trip to a supermarket for some cheap sunglasses - we popped out a lens and sellotaped it over the camera lens. It worked. The camera was put in a plastic bag to keep the water off.
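The detection really was that simple: threshold a colour range, count pixels. A minimal sketch of the idea with OpenCV; the HSV bounds and the pixel threshold here are illustrative placeholders (in practice these were tuned against the real buoys on site, and the talk suggests their threshold may have been set too high):

    import cv2
    import numpy as np

    LOWER_ORANGE = np.array([5, 100, 100])   # illustrative HSV bounds
    UPPER_ORANGE = np.array([20, 255, 255])
    MIN_ORANGE_PIXELS = 3000                 # placeholder avoidance threshold

    def buoy_ahead(frame_bgr):
        # Convert the webcam frame to HSV, mask the orange range,
        # and trigger avoidance if enough pixels fall inside it.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_ORANGE, UPPER_ORANGE)
        return cv2.countNonZero(mask) >= MIN_ORANGE_PIXELS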
It sailed back and forth along the 150m course, and they towed the buoys into the path of our boat. We were sitting on the bank, watching the dashboard, a figure saying 'not detected', repeating, then, just about as we were to hit the buoy, 'detected', but unfortunately too late to swerve out of the way. The proportion of the image required to be orange may have been set too high. Also, they had told us the buoys would be in the middle 50m section of the 150m course; according to their GPS they were, but according to our GPS the buoys were at one end of the course. Our boat had just left the area we had set for it to decide whether or not to swerve. We collided with a buoy. This may not seem good, but it was good enough for us to get first place in that challenge: as the course was very long and narrow, none of the other boats managed to stay in the course. It may have been pure luck that our slot was just at low tide and the current was not pushing the boat. Q: Perhaps you should have collision avoidance at all times? We have to go round buoys at other times and we did not want them detected by that system then. Perhaps we should have made the observation area more generous. GPS is very consistent within the same unit, but there was a questionable difference between our GPS reading and their GPS reading. After all that, we managed to get a win in our class. If you did not manage a valid run on a given day you got 8 points - 7 boats in the class, plus 1. First place gets 1 point, second 2 and so on, and the lowest score wins. So by getting 3 valid entries we managed to come in first overall, which we were all surprised by, as none of us had done this sort of competition before. We learned a lot doing this, and had a lot of fun. My take-away from this is: reliability beats performance. That your boat works is better to focus on than making it work well. It did not do any of the challenges brilliantly, but it did all those challenges, which was a lot of the scoring. Lots of really simple things can go wrong: plug the wrong thing into the wrong thing, use the wrong file misdirecting it, water getting in because you've not sealed it well enough. We had one day when the boat was doing something funny; the chase boat went after it, to pick it out of the water, and it was noticeably heavier because it was full of water. This was a spare-time project for all of us; we all work on other things at the uni. My work is software, programming stuff, and this project brought home to me how challenging it is doing hardware stuff. A whole new array of things can go wrong when dealing with hardware that otherwise a computer deals with. We will be doing it again; we returned the boat to the water for the first time since the competition only a few weeks ago, with lots of ideas on how to make it go better. Q&A. Are you using the heel sensor data for anything? It does get published but it's not of much direct use. It does get used in the same node that publishes it, because the compass data requires it. Do you have any plans for optimising for the wave conditions? It's not something we've done anything with yet, something we wish to investigate more. Particularly in choppy waves, as a small boat doesn't have much momentum, it finds it difficult to tack: as soon as it turns into the wind it loses momentum, loses steerage. So we have some ideas for optimising by tacking on the down slope of the waves. I was thinking of sailing freer, sails farther out and lower?, to pick up speed before tacking? We've not thought about that. Making the boat longer would help with this; a physical solution to this sort of problem is often better than some smart algorithm. If you truly roboticised it, you'd do as a human would do and set the sails farther out, to pick up speed, so the VMG (velocity made good) would possibly be the same as going further off the wind. This happens in small human-crewed boats? We don't have very accurate velocities, just based on the GPS. So we need a way to integrate the GPS sensor with something like accelerometers, to work out accurate velocity feedback. Then that would be possible, a good idea.
How to tell if the sea is choppy or not - maybe a camera system, certainly another sensor required, then experimenting between human observations and trial runs to find correlations. Do you have different sets of polar diagrams for different sets of conditions? No. When you're tacking up wind, how do you decide on lots of short tacks or longer tacks - based on how far you are away from your straight-line course, or set distances maybe? The initial thinking was to detect the lay lines. This is where we found the changing wind direction makes it tricky: the wind changes and the boat thinks the lay lines have swung out 90 degrees, so we have some code that tries to average the wind direction. As we get closer to the waypoint we are trying to reach, a thing called tack voting cuts in. Rather than considering 'am I past the lay line at this moment', it keeps a 10-second rolling count, sampled every 1/10 second, of 'did I think I was over the lay line and ready to turn'. Once that number hits 75 it will turn, which has the nice side effect that it won't turn more often than every 7.5 seconds, because you want some gap between your tacks, to let it build up a bit of speed. Do you know why the French boat did lots of little tacks? They were using a vector-field approach, a vector flow approach. They set up some kind of virtual obstacle and a point of attraction, and between those you can work out optimum fields that point out the direction you want to sail. Hence that team doing lots of small tacks all the way round; artificial potential theory, well accepted in general robotics control, gives that. The French boat was in a different class, about 1.6m long, super light, which means it can easily tack in difficult situations. The net effect was that their boat stuck much closer to the line between waypoints. Do you have a sensor for how much the boat heels over? Yes, from the accelerometers, sensing gravity. At the moment it's only used to correct the compass. Are you allowed a second remote sensing system, in the water, to detect tidal current and transmit that to the boat? I don't think the rules outlaw remote sensors on the shore or whatever. I don't think any team is doing that. We did not have a reliable radio link either. Could you use a parabolic dish rather than the usual wifi thing? I don't know the internal geometry inside the white box; it is long range and highly directional, but the range was not enough for reliability. For your servo systems, do you have something more subtle than the normal proportional control - a suck-it-and-see marginal shift, to test out and then back off, something more sophisticated, as you have a computer on board? No, the servo control is the standard pulse-width modulation. One thing we've been thinking about is that a human sailor will look at the sail, and if he sees it fluttering, you need to pull in a bit more. Could we have a vibration sensor mounted on the sail itself, also measuring the tension in the sheet? So you are sailing at a specific angle to the relative wind rather than looking for the point of flapping/luffing - sailing at a conservative 45 degrees, say, instead? At the moment it's just a hard-coded angle, a hard-coded table of 'if the relative wind is 90 degrees then the sail angle is x', and it adjusts within the angles it knows about.
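The tack-voting scheme described above is easy to sketch: a rolling window of boolean lay-line checks, with a turn ordered only when enough recent samples agree. A minimal illustration using the numbers from the talk (100 samples at 10 Hz, threshold 75); the lay-line test itself is assumed to exist elsewhere:

    from collections import deque

    class TackVoter:
        # 10-second rolling window sampled at 10 Hz; tack when 75 of the
        # last 100 lay-line checks say we are past the lay line.
        def __init__(self, window=100, threshold=75):
            self.samples = deque(maxlen=window)
            self.threshold = threshold

        def update(self, past_lay_line):
            # past_lay_line: bool, did we think we were past the lay line now?
            self.samples.append(bool(past_lay_line))
            return sum(self.samples) >= self.threshold

After a tack the checks flip back to False on the new course, so the window has to refill before another turn can be ordered, which is where the no-more-than-one-tack-per-7.5-seconds side effect comes from.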
Did you ever get involved with strategies of stealing other boats' wind, or that sort of thing? No; there was 1 day of fleet racing scheduled, and all the other challenges were individual boats, one at a time. Even in the fleet race, no one was at the point of being capable of stealing another boat's wind; just going in the right direction was quite enough. Is it the same challenges each year? Similar each year, but not the same. The organisers at the site get to decide what the challenges are. The computer vision challenge with buoys was new last year, replacing a challenge from the previous year that involved collecting data from added sensors on the boat. You know about the challenges before the event? Yes, we could practise them beforehand. So you would know in advance they would be orange buoys? We could do calibration with the GoPro with real objects on the water at the site, to check our coding recognises the object. How many algorithms have you got running? Each line with a node is one bit running, so 15 to 20 things running. That is the tasks, but the algorithms to interact, data from multiple sensors - an algorithm to manage that? Each sensor has its own thing pulling the data from it; there is really only one core algorithm deciding where to go next, essentially. So one algorithm taking all the data in and deciding how to set the sails at any 1 point in time? Yes: try to go on this heading, and the sail control goes separately, keeping track of the relative wind. It's smart enough to know it can't go directly into the wind, and there are different tasks that can switch in - what should the boat be doing now - a different bit of code for the obstacle avoidance, for example. They always choose tidal rivers and not nice quiet reservoirs? The previous year it was in the Baltic Sea; next year will be in Norway, presumably a fjord. If you don't use the heeling sensor, could you not turn it 90 degrees and use it as a pitching sensor? The accelerometer is 3-axis, so we have pitch as well. So pitch and roll off that, not yaw. It gives 3 acceleration readings and we convert that to pitch and roll. How long did you spend on the project? We started in Jan 2016 and the competition was in Sept. We had meetings 1 evening a week and an occasional weekend day of working on it or testing it. Apart from being a bit of fun, is there anything to be learnt for sailing in general? There is a challenge called Microtransat: a boat smaller than 2.4m, which must be wind-powered, has to sail from the UK to the USA. Loads of people try it every year, but no successes yet. If we got a boat like 2.4m, what could we do with it? We could collect environmental data, monitor sea levels, check water quality and waves across the Atlantic. Wind energy is virtually unlimited, no fossil fuel consumed. We're not advancing humanity at the present stage, but in the longer term we open-source all the projects, to anybody interested. We could monitor fish populations, water quality, that sort of thing. It would be interesting and cost effective. All our kit costs: a second-hand boat, about 200 quid, and all the electronics add up to no more than 100; the Raspberry Pi at 30 quid is the most expensive bit, all hobbyist sort of stuff. Are you satisfied with the data from entry-level kit? We stared with envy at a much higher quality accelerometer on display at Ocean Business at the NOC recently, all sorts of hi-tech gizmos. A lot of the stuff we are happy with. We are currently trying to integrate the GPS and accelerometer so we have a speed reading.
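Converting the 3 acceleration readings to pitch and roll works because, at rest or averaged over time, the accelerometer measures only gravity, so the direction of the measured vector gives the tilt. The standard conversion, sketched here (axis conventions and signs vary between boards, so treat these as an assumption):

    import math

    def pitch_roll_deg(ax, ay, az):
        # With gravity the only acceleration, its direction in the sensor
        # frame gives the board's tilt. Yaw cannot be recovered this way,
        # which is why the compass is still needed for heading.
        pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll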
Do all the teams share their data and ideas - perhaps binocular vision or sonar, for instance? One team was doing a sonar thing, not underwater but ultrasonic in air; sonar under water has too many reflections. Most teams were like us, with a camera on the front. Do the rules permit wing-masts and hydrofoils? I think the rule was anything goes, as long as it is powered by the wind. A wing-mast might be simpler to control than a pair of sails - a double-sided sail wrapped around the mast? A couple of teams did wing-sails, so that's allowed. A Flettner rotor type thing, which requires mechanical power to rotate the rotor to then grab wind energy, would not be allowed? You're allowed a linkage from a wind-capturing something, like a propeller, as long as the only source of motive power is the wind. Have you any contacts with the big-boy autonomous, huge trading/cargo sailing ships that are just coming off the drawing boards, multi-mast and huge sail arrays but just 1 human on board, sailing wherever there are reliable trade winds around the world? We must share something in common in the way of the control systems, but we have no direct contact. You've not found any use in conformal coatings over the electronic gizmo boards, just waterproof boxes? We did use Plasti Dip on some of the electronic boards, which gives a kind of waterproof coating. More recently we were told of stuff called Magic Gel: you put your electronics inside a box, fill it with the gel, it goes solid and is not conductive. It's a bit like Argo floats immersing all the electronics in oil, so there's nowhere for the water to get to. I wondered if you had a problem with condensation as much as seawater problems? We've not had condensation problems. Do the organisers allow you to see their GPS system beforehand, as you said yours and theirs were different? A fixed offset all the time, or varying? There was nothing secret about their GPS. We didn't get to look into it, as their boxes were taped shut. As there was something like 30m difference between the two in the collision avoidance challenge, it may be sensible to place their system and ours in one position prior to the race next year, to check for any offset. Were the grids the same for the different classes? I think the bigger classes had bigger grids to search, 20x20m grids; we had 10x10m boxes. Were the bigger boats better at the tasks? In some tasks yes, not necessarily due to the size of the boats. Teams bringing bigger boats were possibly better resourced, more experienced, like the French team. Generally the bigger boats were better at picking up speed before tacking, and the speed is relative to the boat size; the Froude number is much larger for the larger boats. What are the challenges to get one of these boats to cross the Atlantic - just funding for a more robust boat, or? Getting a tiny boat across the sea has many problems. We know of a boat being kept by some fishermen, another attacked by a shark. Some servos did not last even 24 hours, because of severe sea conditions. Waves of 7 or 8m with a boat that is only 2m is not nice. There is a team near London that launched a Microtransat attempt from this area; it got into the Channel, but with the tides and things it never got out of the Channel, just being pushed back and forth, and eventually washed up on shore. You need the endurance of power for the computers as well. We currently use a USB power-bank that would otherwise be used to boost a mobile phone, which works well, plus a set of AA batteries for the servos. The Microtransat boys have solar panels on theirs, and batteries so it doesn't die in the night.
Just the ability to keep going without stuff breaking, for the length of time involved, and to make headway against wind and tide and big waves. A number of teams start out each year, going west and east across the Atlantic, and so far no success. How does the robotic control fare against human control, say via a joystick? When everything is going smoothly, the robotic control is comparable to someone without much experience of r/c sailing. A good r/c sailor could always beat our boat.

Monday 12 June, Dr Roeland de Kat, Soton Uni: Forces and turbulence in avian flight. 27 people, 1.5 hours. Over about 10 years I've done bits and pieces on avian flight. Today I'll talk on forces and turbulence. A lot of this work has been done with David Lentink, who now has his own lab at Stanford. I'll squeeze in some par-avian flight and finish with avian turbulence. The main reason I'm into this is because these little creatures are amazing. You see them flash by and you don't fully appreciate what is going on. I spent a day with a high-speed camera chasing gulls on Southampton Common. A 400 fps video of one in slow motion: it flares off, stops in mid-air, drops down seeing something I could not see, takes a fish in the beak and goes straight up and out. A lot of things going on there, and a lot is beyond my expertise; that's why we need to work with different types of people. My background is aerospace engineering, so I can figure out some of the elements of its flight. A pic of a swift: amazing fliers. David Lentink picked the swift because of its intermittent flapping and level flight. As soon as you see something flapping, engineers and biologists say that's way too difficult, so we need to know a bit more about what is happening. One thing David observed, in seeing them fly, is that they change their wings, having them spread out or swept back. I was doing my masters in Delft and David asked me to work on this, swift flight. So we looked at morphing wings, how they control the glide performance of swifts. What are the forces that act on the wings, how do the forces change as they change wing shape, and what does that mean for flight. We both had engineering backgrounds ???: that's not right, 5 degrees, 50 degrees, it's meant to be 0 degrees, 60 degrees. Somewhere in the process of removing the body of the bird and freeze-drying the wings, the wings did something by themselves. When we put the wings in the freeze drier, we thought we had them at 0, 15, 30, 45, 60 degrees, but we needed to quantify it. Part of the wrist sets the sweep angle, and that changes. We don't always get what we want. A colleague, a true biologist - we said to her this is not true 0 degrees; she said it's within 15%, that is biological variation, it explains everything. We went about quantifying the different sweep angles and a few other things we care about when talking about aircraft and flight: the aspect ratio, generally linked to how efficiently the wing performs, and the wing area, as the bigger it is, the more lift produced. Classically these are just parameters: you pick, you set, then you forget about them and design your aircraft. Our flying machine includes multiple wing areas, multiple aspect ratios. The first tests were forces, classically drag and lift; we want the highest lift possible for the lowest drag. If you want to compare a swept-back wing to a straight wing: one bird, one wing, can change it. So instead of going at it like an engineer and normalising everything into non-dimensional things, we need to take into account that it can change its wing area. If you put the wing area back into the equation, then you see the differences between the different stances become much larger. The envelope plot goes from wings straight all the way to swept back. If you change the flow velocity, keeping the medium (what is flown through) the same and keeping the size the same, the change of velocity changes the Reynolds number, the parameter that tells you how difficult it is to deal with the flow.
The higher the Reynolds number, the more complex the flow gets; the lower the number, the less complex. If you add particles it may get more complex again - one of my colleagues works on that. Adding this in, we have to take into account that this occurs in a flow regime where things change: they change from laminar to turbulent. If we add those different velocities into play, 5 metres per second to 30 m/s, cranking up the wind in the wind tunnel, the dashed envelope line changes further. We can scan the parameter space of what we expected these birds could do. A bunch of numbers that don't necessarily mean anything, so we took those numbers and put them into a glide model. They are flying in different poses and we know they are fully balanced. We have the lift and the drag from our equations, and an estimate for the weight; then we say, if it flew at this velocity, what can it do, and at a different velocity, what can it do. Not just a glide, but what if it flies in a spiral: swifts swirling across streets glide, and like to turn rapidly as well, quickly changing their sweep angle. Generally in aerospace engineering we ignore a few things: we say gamma, the glide angle, is small, so we can neglect a whole load of terms. But we needed those terms to fully describe bird flight. Equations, looking at performance indicators: how far forwards can it fly with a 1m drop; or, for 1m down, what is the slowest it can go down. The sink velocity, the glide ratio. Maybe it wants to escape a predator, maybe you want the ground speed to be the highest. Then some turning velocities and performances as well: what's the largest turning angle we can get per metre of descent, what is the tightest turn we can make, what is the quickest we can turn. Below 45 degrees, gracefully falling; about 45 degrees, gliding ? flight - an easy cut-off, 45 degrees. Most follow the same trend: efficiency peaks at the lower velocities, while anything more on efficacies or group-power can peak at the higher velocities. Looking at the glide ratio, we can already see a few interesting things. The peak is exactly where we expected: at undergrad level aerospace, for a straight wing, the highest aspect ratio is the best. A glide ratio of about 11; estimates for albatrosses don't go much higher, 15 or so. So a pretty good glider. As we increase velocity, the straight wing is not the best any more. Initially this was a surprise: why is that happening? Looking a bit closer into this, it is what we could expect, if you take into account the area of the wing; the wing area changes. Return to the curve with high lift, low drag: that needs to balance with its weight in flight. Increase velocity, the coefficient goes down, poorer performance. As we go to swept-back wings, performance gets better again. Performance improves at higher velocity purely due to the area change: change your area, and you can stay at the better-performing part. Q: Does the angle of the wing vary over the wingspan as well, in this change? Likely, but that's an additional challenge, not included in this presentation. We looked at the deflection of the wings, from the rear, at different velocities, and they do deflect a lot. Lots of pics show that swift wings are pretty well planks as wings, not a lot of twist present. With our prepared wings, the twist was not big enough to quantify.
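For reference, the Reynolds number compares inertial to viscous effects in the flow. A rough worked example for a swift wing, using the roughly 37.5mm average chord quoted later in the talk and a textbook value for air's kinematic viscosity (my arithmetic, not the speaker's):

\[ Re = \frac{V c}{\nu} \approx \frac{10\ \mathrm{m/s} \times 0.0375\ \mathrm{m}}{1.5 \times 10^{-5}\ \mathrm{m^2/s}} \approx 2.5 \times 10^{4} \]

That is orders of magnitude below an airliner wing, which is why the low-Reynolds results later in the talk don't carry over to big, fast aircraft.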
Q: When you say the aspect ratio is very high, does it change the wing area? Wing area plays a role as well. Q: But on an aircraft the wing area is always the same, irrespective of sweeping the wings back, so why is this different? The feathers overlap; when they start overlapping more, the area changes. The upshot is that instead of the small difference between a straight and a swept-back wing, here there is a huge difference, because area plays a role. At equilibrium it balances at about 7 m/s; if it goes faster and faster, there is a curve that says equilibrium is lift^2 + drag^2 = weight^2. When that moves down, it shows how swept-back wings perform better at higher velocities. But does it really fly at those speeds? We took data from a different study, Bäckman & Alerstam, of the most probable flight velocities, the most observed flight velocities. The velocities swifts are recorded at fall in this range, and fall in the range of all the efficiency parameters, not the efficacy parameters. A lot of measurements, a lot of observing a wing in a wind tunnel doing nothing at all, for about 2.5 weeks, 9am to 3am, very tiring. Now for the vortices. If we take these wings, placed in a wind tunnel - what I found intriguing, what sold me on it, is we have leading-edge vortices. Before I started, my 2004 internship showed them on a model wing. But engineers would say, in the 1950s we had such vortices on swept-back wings. Something else about these wings may have a role: they are porous. So do the LEVs make the wing more efficient? We needed a way to capture them and measure them. So take a tin can, cut a hole, weld something into it, take a cigar, put it in, add high-pressure air, have a rake on the other end, hold the rake in front of the wing, puff the smoke and you can see the vortex. That failed miserably, because cigar smoke is very moist, with a lot of tar, so it immediately clogged up the tubing. After trying loads of things, I took a tuft of my hair and we used that to visualise the flow. If you see rotation, then the flow is pushing my hair around, i.e. a vortex. You can follow it into the tip vortex; the key is it started turning in the position of a LEV. So we logged the moving around of the hair and whether we saw rotation or not. Pictures of where there was a cone and not a cone showed, for the rake used, that the rake does not create the vortex itself. So there were LEVs, but they were not present in any of the cases where there was peak efficiency. So wherever the swift flies most often, there are no LEVs. Where they did show up - and what 1950s engineers designed around as well - you have increased efficacy: a peak lift that you can create, so you can turn very quickly. If in a dog-fight, or chasing insects, and the target goes off in one direction, you need to go after it. That's where it comes into use, where they use their LEVs. We've only touched the tip of the iceberg of research into bird flight. A lot of current research is into capturing what living things do, and building them ourselves. I moved to Soton to look at turbulent boundary layers and develop experimental techniques. The first thing I looked at was a feathered dinosaur: a small dinosaur, and a similar approach to the swift research. Take a model, place it in a wind tunnel, get forces, make predictions as to what it could do. In the fossil record such creatures are flattened out, feather material and all. So a crow of its day, iridescent feathers; but could it fly, and how well did it fly? Some previous researchers said a CL of 1 sounds good, a glide ratio of 15 sounds good; combine them and it's a very good flier.
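On the equilibrium relation quoted above: in a steady glide the total aerodynamic force balances the weight, which with glide angle \(\gamma\) gives the standard decomposition (ordinary flight mechanics, not specific to this study):

\[ L = W\cos\gamma, \qquad D = W\sin\gamma \;\Rightarrow\; L^2 + D^2 = W^2, \qquad \frac{L}{D} = \cot\gamma \]

So a glide ratio of 11 means roughly 11m travelled forward per metre of height lost, and the terms the speaker says engineers usually neglect come from approximating \(\cos\gamma \approx 1\) when the glide angle is small.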
So a colleague, Colin Palmer, found a pigeon in his yard, bought a duck, and created a model. A long tail, feathers on the legs, and feathers on the wings. What's this with the legs? Palaeontologists were bamboozled; perhaps everything was spread out. Some people when young can put their feet behind their head, so we projected reasonable extremes, given the bones, of what it could do: legs sprawled or legs down, and asked, does it fly? We put it upside down, because the balance in the wind tunnel is at the top: a 2.5 x 1.5m tunnel with the animal in there of about 0.6m span. Something at the rear pushes it up and down, changing the angle of attack; little weights and servos capture the forces. Lift and drag as before, but this time we add the moment. With the centre of gravity and moment at non-zero, it will rotate. We wanted to make sure that, whatever we say it can do, it doesn't turn like a leaf and roll or flutter down. So the moment about the c of g needs to be 0. We tried to measure the moment with the earlier swift work, but we failed. Luckily there are plenty of accounts that swifts can fly, and do so without their tails spread, in the vast majority of poses. For swifts we could ignore the tail and it was fine, the findings not affected by not having a moment. Other researchers avoided this also, saying there were various points at which it could fly. We accounted for it with the speed-specific dynamic force, which is basically the total force, then splitting it into lift and drag. The glide ratio, the speed-specific moment, and regions where it could fly, plus some areas where it is not stable. If it moves up, that makes the moment larger, so it keeps moving up, and you get a confetti effect. To fly there it would need a big brain, which is up for debate. Elsewhere it is stable, not requiring a brain: just jump out of a tree and fly. So jump out of a tree, make your pose and see what happens. With fairly simple assumptions you get to glide paths. Initially this showed legs down is clearly better than legs sprawled. Then the engineer comes in: maybe it could move its arms back and forth, so we need to account for that. We modelled that by saying it could move its c of g with respect to the lifting surfaces. That tilts/shifts the moment up and down. As the big/small brain debate is still ongoing, we just say there is an unstable part and a stable part. If small brain and not a good flier, it can fly in the stable section. With big brain and advanced controls, it can fly in the unstable part as well. Stable areas, no thinking required; areas where it could glide, but it needs to work hard. Compare the 2 plots and it's not that different. Jump out of a 30m tree: on the glide path, if you want to go farther, legs sprawled - we think it has a bigger lifting surface. Go to the side, with the wing feathers, and it goes up. I then started looking at turbulent boundary layers, TBL, on bird wings. A flat plate is boring compared to a bird wing. We thought behind this was some meaning as to how flight evolved in feathered creatures. A wing is not a flat plate: the feathers overlap, and in the overlapping there is a roughness. Feathers are not all smooth, nor flat at the top. To get a good force out of an aerofoil, first-year students are told it needs to be flat, a sealed plane, flatness 0.000-something %. So how does the roughness work here? The lakidys? with the veins around it. Shone a laser with a lens in front, take a picture and get a cross-section. Fairly smooth in the middle, and as we move outwards it gradually gets more corrugated, looking more like a dragonfly wing than a bird wing.
They fly much faster than dragonflies, a different flow regime. Measure them, subtract the average curvature (because that has nothing to do with the surface roughness), colour-code the height, then compare that to the average chord, about 37.5 mm: peak to peak it is 0.8 mm, 2% of the chord, instead of those 0.000-something %. So very rough; it must have an effect on the flow, it must be fully turbulent. So, wing in the wind tunnel, to find where there is turbulent flow and where there is laminar flow; skip the bit in the middle as too difficult to deal with, but there is something other than those 2 flow regimes. If we used a hot-wire, as for measuring velocities, it would probably cut the wing. Use smoke and you might create a wet wing, very different to dry birds. The best thing we came up with was a microphone with a very long tube on it. Build it right and it's only sensitive to pressure fluctuations at the tip of the tube. Traversing the wing: patches of single-pitch noise, not turbulent. The hissing sound is laminar separation. So we move our listening tube along the wing to find where the sound changes. It does not change randomly, as turbulence is quite well defined in a broadband signal. The tonal noise we were not sure what to do with; I'll come to that. Put it in a Fourier-transform analyser and you get what frequencies are in there: high, low, broad or nothing (see the sketch after this section). You look for where there is extreme change; that is where we go from laminar to turbulent. Mark with dots where the changes are, do the same thing for 4 angles of attack, times 3 wings. We take 0, half way and maximum lift-to-drag, which is where it's interesting; also where there is maximum lift, or stall, where you'd expect changes to happen. So we locate where the changes are, but that's not where the roughness is. Where the roughness is, no transition. Everyone tells you, going through aeronautics, where it's rough there is transition to turbulent flow, the worst thing you can do to your aircraft. Swifts don't care about this; they just do their own thing. Flow is not changing instantaneously; turbulence is not something that clicks over, now laminar, now turbulent. There is a transition process: in one place, with a certain Reynolds number, transition may only occur later on. So an area with laminar flow, and what that means for the bird: for different angles, it's primarily laminar. The peak is almost 75% laminar, or at least non-turbulent, where it will perform its best. That's where it flies most, unsurprisingly. How do we know it's not just a thing with swifts? How do we know it's the roughness? By testing. The swift wing, with calipers, trying to find where the ridges and valleys are; then I used a laser scan. We built a wing with thin pieces of tape attached, one with ribs and one without. We used the listening tube again on this. The rough wing has more laminar area than the smooth one at low Reynolds numbers. As we increase velocity we get back to normal: big aircraft with big Reynolds numbers, flying fast. A swift is a small bird, fast in bird terms but slow in airliner terms. Right about the range where these birds fly, rough wings are not bad, in terms of laminar area. What does that mean in terms of performance? More laminar area means better performance? Not necessarily. Our wing, at lower Reynolds numbers, does get better performance. What is going on in the flow? We still needed an answer.
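The listening-tube classification above lends itself to a simple signal-processing sketch: estimate a power spectrum for each patch, then separate a sharp single-pitch peak (laminar separation) from a broadband rise (turbulence). A hedged Python illustration; the thresholds and the quiet-level reference are arbitrary stand-ins, not values from the study.

import numpy as np
from scipy.signal import welch

def classify_patch(pressure, fs, quiet_level=1e-6):
    """Crude spectral classification of a listening-tube recording."""
    _, psd = welch(pressure, fs=fs, nperseg=2048)
    level = psd.mean()
    # Spectral flatness: ~1 for broadband noise, ~0 for a single tone.
    flatness = np.exp(np.mean(np.log(psd + 1e-30))) / psd.mean()
    if level < quiet_level:                 # arbitrary reference level
        return "quiet (attached laminar flow)"
    return "broadband (turbulent)" if flatness > 0.1 else "tonal (laminar separation)"

# Synthetic demonstration signals, 1 s at 50 kHz.
fs = 50_000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 1800 * t) + 0.01 * rng.standard_normal(fs)
hiss = rng.standard_normal(fs)
print(classify_patch(tone, fs))   # -> tonal (laminar separation)
print(classify_patch(hiss, fs))   # -> broadband (turbulent)

In the experiment the marker for transition is where this classification changes abruptly as the tube traverses the wing, not the absolute levels.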
A master's student in Delft took the roughest of the measured profile lines of the wing, averaged them, measured under a microscope the leading-edge radius of one of the feathers, estimated the thickness, and produced a 3D-printed model. Then remove the roughness, make a smooth model, 3D print it. Now anything you machine or 3D model will be as you intended it to be; it's not far off. Placed in a water tunnel. Luckily, at low velocities air behaves as water, as long as you match the Reynolds numbers. With water the forces are 4 to 5 times higher, so easier to measure, or so we thought. (Water is roughly 800 times denser than air, but matching Reynolds number means running about 15 times slower, and force scales with density times velocity squared, so the net gain is only a factor of a few.) 3 cameras looking at it, a load of particles placed in there; watch where they go and we get velocity fields. We tested 3 angles of attack, for 4 different Reynolds numbers, in the range where we expected changes to happen. Not much obvious difference. So we took snapshots, summing and dividing by n. Looking at vorticity, so rotating one way or the other, or shear. Low angles of attack, nothing there; intermediate angles, a little hint of something there; high angle of attack, definitely something going on. Vorticity is not too natural a thing to look at. We zoomed into a small area and looked at vector representations, much easier to interpret. The vortices that get induced are the tonal bursts you heard on the video recording. Vortices move; they're periodic. In the global view we might not see it. In aeronautics we reduce it to a bunch of numbers describing what the flow is doing. The boundary layer grows as it goes over an aerofoil; it has a shape and it can be quantified in multiple ways (see the sketch after this section). Generally we pick the boundary layer thickness, about 99% of the external flow. Then we can determine how much velocity you would need; rather than a curve, you make it a straight line, the displacement thickness. Then you can work out how much energy is lost. With the boundary layers you get an inflection, a separated flow; flow that is not attached means poor performance. There are fluctuations around the average profile. Boundary layer thickness: not much difference, rough or smooth. The shape factor is one of the indicators of whether flow is turbulent, laminar or separated. A bigger wake is generally indicative of a separated region as well. For the rough wing, we get vortex trains at intermediate angles of attack, where peak performance is. Low angles of attack, no vortices: laminar flow over a rough wing. At high Reynolds numbers they both go turbulent. At high angles of attack, smooth and rough wings are little different. This is where a swift wing performs and a smooth wing does not. The beginning of the wing kicks the flow, it gets agitated and stays attached. It does not separate; it follows the wing. Although turbulent flow is bad in causing drag, it's a lot better than having a separated flow, as that would mean no lift.

I have a PhD student also interested in feathers, but he comes at it from a different direction, as a biologist. So we looked inside feathers. Taking a swan feather to pieces, placed in a synchrotron, a large particle accelerator. The particles shoot through the feather and a scintillator turns the radiation into light, and we look at it with a microscope, moving around 40 times. The core has a patterning, but move to the outside, no patterning. The material properties change. Also there are multiple layers in a feather. So, beyond avian aerodynamics, one of the most complex advanced composite structures in the world.
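Referring back to the boundary-layer quantities above: the 99% thickness, displacement thickness and shape factor are standard integral measures, easy to compute from a measured velocity profile. A minimal Python sketch; the sample profile here is a generic laminar-like shape, not data from these experiments.

import numpy as np

def boundary_layer_stats(y, u, u_edge):
    """Integral measures of a wall-normal velocity profile u(y).

    delta99:    height where u first reaches 99% of the external flow.
    delta_star: displacement thickness, integral of (1 - u/U) dy.
    theta:      momentum thickness, integral of (u/U)(1 - u/U) dy.
    H = delta_star / theta is the shape factor: roughly 2.6 for a
    laminar (Blasius) layer, 1.3-1.4 for turbulent, and larger again
    as the flow heads towards separation.
    """
    ratio = u / u_edge
    delta99 = y[np.argmax(ratio >= 0.99)]
    delta_star = np.trapz(1.0 - ratio, y)
    theta = np.trapz(ratio * (1.0 - ratio), y)
    return delta99, delta_star, theta, delta_star / theta

# Generic laminar-like profile over a 1 mm thick layer.
y = np.linspace(0.0, 2e-3, 400)                            # wall-normal position, m
u = 5.0 * np.sin(np.clip(y / 1e-3, 0, 1) * np.pi / 2)      # edge velocity 5 m/s
print(boundary_layer_stats(y, u, u_edge=5.0))              # H comes out near 2.6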
So the next research is trying to determine what the layers are, and which way the fibres are pointing. We could then model the composite, tear it apart again, and see if we can tell something about it. Hopefully this will allow us to build better structures, maybe improve small-scale flying objects, wind turbines or whatever.

Q&A
You said the wings were porous, so with bats, where it's just a membrane, would that behaviour be more like conventional aerodynamics? Feathers are porous, but for most lifting purposes they're impermeable. We tried to model the porosity, and we failed. Whereas swift wings use roughness to keep the flow attached, what membranes might do is change the camber, the curvature, giving more lift, but a membrane vibrates more. With a vibrating membrane, it does the same thing as roughness: it energises the flow, it creates vortices, it keeps the flow attached. A completely different phenomenon, but the effect is the same.
How much air goes through the feathers? There are people that look at the permeability; it's very little. The latest model I've seen, they've tried to 3D print: from the bones you have the feathers, from the radius? you get the barbs. Optically it would seem to be porous, but when you pressurise it, it tends to flap and close. People apply pressure differences to see how much goes through. They try to model that in 3D-printed wings by creating simple holes, and they are not the answer, as the flow goes through, resulting in fully separated flow. There is some work on the outermost primaries of storks, where there is a hole and the rest seems closed; if they close that gap with wax, the feather, which is a single aerofoil, performs worse. Little holes may have jets emerging, and that may keep the flow attached, on top of the roughness. But that varies species to species, and with so many species out there it's difficult to say, this is how feathers work. There is likely some flow coming through; it's too small for us to measure.
With the albatross, a high aspect ratio? It's about the same as the swift.
Can the albatross change the shape of its wing, for more speed? The albatross has a little ligament in there; it places it, and it locks. Without any effort it remains straight. Swifts have loose wings, and they have to force it. Different species have different mechanical solutions built in, to help with their flight behaviour. The albatross flies much faster than the swift, and is bigger, so it's in a place where it will not benefit from a rough wing.
A barn owl flew in front and across me, one dark night, with a wingtip just 1 or 2 feet from my head, and I did not hear a thing. What's going on there, a very downy wing surface? Multiple things ... (to be continued)
