Research Seminars

|
Seminario: "Un informático trabajando entre biólogos en el CNAG (Centro Nacional de Análisis Genómico). "
|
Date: 22/05/2017
Time: 12:00
Place: Room 1, Ada Byron building
Speaker: Santiago Marco-Sola
ABSTRACT:
In biotechnology, the recent development of high-throughput sequencing (HTS) has been one of the most significant advances for modern research in biology and biomedicine. This set of protocols and technologies makes it possible to sequence an individual's genome in a matter of days. Its applications range from diagnostic medicine through evolutionary biology to biochemical and molecular research. Sequencing has thus become the workhorse of a multitude of research groups, playing a leading role in many of the most relevant scientific discoveries of the last decade.
Today the CNAG (Centro Nacional de Análisis Genómico) can sequence up to 1.5 terabytes of genomic information per day. Equipped with 3,472 compute nodes and 7.6 petabytes of storage, the CNAG processes, analyzes, classifies, and stores billions of DNA sequences daily. To meet this challenge, many research groups in algorithmics and computing are investigating new algorithms and analysis methodologies, aiming to develop efficient, scalable, and flexible tools for genomic analysis in bioinformatics.
In this talk I will give an overview of the bioinformatics protocols, tools, and algorithms that a high-throughput sequencing center such as the CNAG uses routinely. I will also introduce the main computational challenges we face, future trends in bioinformatics and biotechnology, and new and exciting opportunities for computer scientists interested in joining the world of bioinformatics.
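As an illustration of the core computational problem behind these pipelines, the sketch below shows a deliberately naive read aligner: it scans a reference sequence for the position where a short read matches with the fewest mismatches. It is not code from the CNAG toolchain; production aligners rely on compressed indexes (e.g., FM-indexes) and approximate string matching to handle billions of reads, but the underlying question is the same.

```python
# Toy read aligner: brute-force scan for the reference position with the
# fewest mismatches (Hamming distance). Illustrative only.

def hamming(a, b):
    """Number of mismatching characters between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def align_read(read, reference):
    """Return (best_position, mismatches) of the read within the reference."""
    best_pos, best_dist = -1, len(read) + 1
    for pos in range(len(reference) - len(read) + 1):
        dist = hamming(read, reference[pos:pos + len(read)])
        if dist < best_dist:
            best_pos, best_dist = pos, dist
    return best_pos, best_dist

if __name__ == "__main__":
    reference = "ACGTTAGCCGGATCGATTACAGGCTTAACGT"
    read = "GATCGATAACA"          # one mismatch against the reference
    pos, mismatches = align_read(read, reference)
    print(f"best position: {pos}, mismatches: {mismatches}")
```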
SPEAKER:
Santiago Marco-Sola received his degree in Computer Engineering from the Universidad de Zaragoza (Centro Politécnico Superior) in 2010. He then obtained a master's degree in Computing from the UPC (Universitat Politècnica de Catalunya), specializing in algorithmics and programming. For the last six years he has worked in the "Bioinformatics Development & Statistical Genomics Team" at the CNAG (Centro Nacional de Análisis Genómico). During this period he developed his doctoral thesis, focused on efficient genomic sequence alignment techniques for high-throughput sequencing.
He is currently a lecturer at the Escola d'Enginyeria of the UAB (Universidad Autónoma de Barcelona) and a researcher in the CAOS (Computer Architecture and Operating Systems) group. He also collaborates with the "Computational Biology of RNA Processing" group at the CRG (Centre for Genomic Regulation) on the development of high-performance tools for genomic data analysis in bioinformatics. His research interests include sequence indexing and alignment algorithms, compression algorithms, and high-performance applications on heterogeneous platforms.

|
Seminario: " Ajuste automático de sistemas, algoritmos y simulaciones mediante optimización Bayesiana"
|
Speaker: Rubén Martínez Cantín (Centro Universitario de la Defensa, SigOpt)
Place: Seminar room A-25, Ada Byron building
Time: 11:00, May 18
Abstract:
Bayesian optimization combines machine learning and optimal decision-making techniques to provide a highly efficient optimization method. It achieves results comparable to those of other global optimization methods (genetic algorithms, simulated annealing, swarm intelligence, ...) with only a tiny fraction of the samples, which makes it suitable for very expensive processes. Its use has recently been growing exponentially, both in academia and in industry, thanks to the many applications that keep being found: optimal tuning of algorithm parameters, control of systems and robots, simulation design, improvement of manufacturing processes, industrial design, and more. The automatic tuning of deep learning models is having a particularly large impact. The talk will review the methodology and present recent advances made within the research group.
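To make the idea concrete, here is a minimal, self-contained sketch of the Bayesian optimization loop: a Gaussian-process surrogate with an RBF kernel plus the expected-improvement acquisition, used to minimize a cheap stand-in objective in one dimension. It illustrates the general methodology, not the speaker's or SigOpt's implementation; the objective, kernel parameters, and budget are arbitrary choices.

```python
# Minimal Bayesian optimization sketch (illustrative only): GP surrogate with
# an RBF kernel + expected improvement, minimizing a 1-D stand-in objective.
import numpy as np
from scipy.stats import norm

def expensive_objective(x):
    # Stand-in for a costly experiment or simulation.
    return np.sin(3.0 * x) + 0.3 * x ** 2

def rbf_kernel(a, b, length=0.5, variance=1.0):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at the test points."""
    k_tt = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_ts = rbf_kernel(x_train, x_test)
    k_ss = rbf_kernel(x_test, x_test)
    solve = np.linalg.solve(k_tt, np.column_stack([y_train, k_ts]))
    mean = k_ts.T @ solve[:, 0]
    cov = k_ss - k_ts.T @ solve[:, 1:]
    return mean, np.sqrt(np.maximum(np.diag(cov), 1e-12))

def expected_improvement(mean, std, best_y):
    """EI for minimization: expected improvement over the best value so far."""
    z = (best_y - mean) / std
    return (best_y - mean) * norm.cdf(z) + std * norm.pdf(z)

# Start from a handful of random evaluations, then let the acquisition
# function decide where to sample next.
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=3)
y_train = expensive_objective(x_train)
candidates = np.linspace(-2.0, 2.0, 400)

for _ in range(10):                       # only 10 extra evaluations
    mean, std = gp_posterior(x_train, y_train, candidates)
    ei = expected_improvement(mean, std, y_train.min())
    x_next = candidates[np.argmax(ei)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, expensive_objective(x_next))

print(f"best x: {x_train[np.argmin(y_train)]:.3f}, best y: {y_train.min():.3f}")
```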
Bio:
Rubén is a professor at the Centro Universitario de la Defensa and a member of the RoPeRT group of the I3A. In recent years his research has focused on the development of Bayesian optimization and its application to machine learning and robotics. He is currently a researcher at SigOpt Inc., which offers BoaaS: "Bayesian optimization as a Service".

|
Talk: Programming Molecules in the Age of Nanotechnology
|
It will take place on Monday, May 8 at 12:00 in room A13 of the Ada Byron building.
Abstract
When scientists combine computer science with the information-processing power of molecules, science fiction becomes a reality. Self-assembling, programmable systems at the nanoscale are poised to have a major impact on society, from personalized medical therapeutics to biosensors that could detect pollutants in our water or disease in our bodies. This talk will describe our work at Iowa State University aimed at using computer science and software engineering methods to design molecular programmed systems that are efficient, verifiably correct, and safe for use.
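Purely as a flavour of what "programming molecules" can mean (this example is not taken from the talk or from the Iowa State work), the sketch below runs Gillespie's stochastic simulation algorithm on the classic approximate-majority chemical reaction network, a textbook building block in molecular programming: the mixture is driven toward whichever of the two species X and Y started in the majority.

```python
# Gillespie simulation of the "approximate majority" chemical reaction network,
# a standard toy molecular program (not from the talk):
#   X + Y -> 2B      B + X -> 2X      B + Y -> 2Y
import random

def simulate(x, y, b=0, rate=1.0, seed=42):
    rng = random.Random(seed)
    t = 0.0
    while y > 0 or b > 0:          # run until only X remains (or deadlock below)
        # Propensities of the three reactions under mass-action kinetics.
        a = [rate * x * y, rate * b * x, rate * b * y]
        total = sum(a)
        if total == 0:             # no reaction possible (e.g. Y has won)
            break
        t += rng.expovariate(total)            # time until the next reaction
        r = rng.uniform(0, total)              # pick a reaction proportionally
        if r < a[0]:
            x, y, b = x - 1, y - 1, b + 2
        elif r < a[0] + a[1]:
            b, x = b - 1, x + 1
        else:
            b, y = b - 1, y + 1
    return t, x, y, b

if __name__ == "__main__":
    t, x, y, b = simulate(x=60, y=40)      # X starts in the majority
    print(f"t={t:.3f}  X={x}  Y={y}  B={b}")
```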
Short bio
Dr. Robyn Lutz is a professor of computer science at Iowa State University. She was on the technical staff of Jet Propulsion Laboratory, California Institute of Technology until 2012, most recently in the Software System Engineering group. Her research interests include safety-critical software systems, product lines, and the specification and verification of DNA nanosystems. She is an ACM Distinguished Scientist. She served as program chair of the 2014 International Requirements Engineering Conference, recently completed her second term as an associate editor of IEEE Transactions on Software Engineering, and is on the editorial board of the Requirements Engineering Journal.

|
Seminario: MIPSfpga: Using a Commercial MIPS Soft-Core in Computer Architecture Education
|
It will take place this Friday, May 5, from 10:00 to 11:00 in room A.05 of the Ada Byron building.
SUMMARY: In this talk I will introduce MIPSfpga and its accompanying set of learning materials. MIPSfpga is a teaching infrastructure that offers access to the non-obfuscated RTL source code of the MIPS microAptiv UP processor. The core is made available by Imagination Technologies for academic use and is targeted to an FPGA, making it ideal for both the classroom and research. The supporting materials and labs focus on hands-on learning that emphasizes computer architecture, System on Chip (SoC) design and hardware-software codesign. Among other things, students learn to set up the MIPS soft-core processor on a field-programmable gate array (FPGA); run and debug programs on the core in simulation and in hardware; add new peripherals to the system; understand the microarchitecture and extend it to support new features; experiment with different cache sizes and content-management policies; add new instructions using the CorExtend interface available in MIPS processors; and understand SoCs in embedded systems and how they are designed and built up in layers to run complex software such as Linux.
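To give a feel for the kind of experiment the cache labs involve, here is a small, self-contained sketch (not part of the MIPSfpga materials): it replays a synthetic address trace through a toy direct-mapped cache model and reports how the hit rate changes with cache size.

```python
# Toy direct-mapped cache model (illustrative only): replay an address trace
# and see how the hit rate changes with cache size.

LINE_SIZE = 16          # bytes per cache line

def hit_rate(addresses, cache_lines):
    """Fraction of accesses that hit in a direct-mapped cache."""
    tags = [None] * cache_lines          # one stored tag per cache line
    hits = 0
    for addr in addresses:
        block = addr // LINE_SIZE        # which memory block is accessed
        index = block % cache_lines      # which cache line it maps to
        tag = block // cache_lines       # what identifies the block there
        if tags[index] == tag:
            hits += 1
        else:
            tags[index] = tag            # miss: fill the line
    return hits / len(addresses)

if __name__ == "__main__":
    # Word-by-word sweep over an 8 KiB array, repeated a few times (loop-like trace).
    trace = [i * 4 for i in range(2048)] * 4
    for lines in (64, 128, 256, 512):
        size_kib = lines * LINE_SIZE // 1024
        print(f"{size_kib:3d} KiB cache -> hit rate {hit_rate(trace, lines):.2f}")
```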
SHORT BIO: Daniel Chaver received his degree in Physics from the University of Santiago de Compostela, Spain, in 1998, and the Electrical Engineering degree and Ph.D. from the Complutense University of Madrid, Spain, in 2000 and 2006, where he is currently an Associate Professor. He has taught many courses related to Computer Science and Electrical Engineering since 2000. He has co-advised three PhD theses and co-authored more than 40 papers. Since 2015 he has collaborated with Imagination Technologies. His current research interests include: (1) architectural techniques for efficiently managing the memory hierarchy, and (2) OS scheduling techniques for asymmetric multiprocessors.

|
Seminario: PMCTrack: delivering hardware monitoring support to the system software
|
It will take place this Thursday, May 4, from 12:00 to 13:00 in room A.05 of the Ada Byron building.
Summary:
Hardware performance monitoring counters (PMCs) have proven effective in characterizing application performance. A large body of work has demonstrated that several components of the operating system (OS), such as the scheduler, can perform effective runtime optimizations in multicore systems by leveraging performance-counter data. While existing tools greatly simplify the collection of PMC data from user space, they do not provide an architecture-agnostic mechanism that is capable of exposing high-level PMC metrics to the OS. Thus, the implementation of OS-level PMC-driven optimization schemes is typically tied to specific processor models.
In this talk I will present PMCTrack, an open-source tool for the Linux kernel that seamlessly enables the system software to access PMC data in an architecture-independent fashion, and also provides other insightful monitoring information available in modern processors, such as cache occupancy or energy consumption. Despite being an OS-oriented tool, PMCTrack still allows the gathering of monitoring data from user space, making it possible for users and developers to perform offline analysis in various ways.
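As a small illustration of the kind of high-level metric such a tool can expose to the OS, the sketch below turns raw per-interval counter values into IPC and last-level-cache misses per kilo-instruction. The sample numbers and field layout are invented for the example; they are not PMCTrack's actual interface or output format.

```python
# Hypothetical example: deriving high-level metrics (IPC, LLC misses per
# kilo-instruction) from raw per-interval counter samples. The values and
# layout below are made up for illustration; this is not PMCTrack output.

samples = [
    # (instructions retired, cycles, LLC misses) per sampling interval
    (1_200_000,   900_000,  4_500),
    (2_500_000, 1_000_000,  1_200),
    (  800_000, 1_100_000, 15_000),
]

for i, (instr, cycles, llc_misses) in enumerate(samples):
    ipc = instr / cycles                       # instructions per cycle
    mpki = 1000 * llc_misses / instr           # misses per kilo-instruction
    print(f"interval {i}: IPC={ipc:.2f}  LLC-MPKI={mpki:.2f}")
```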
Short bio
Juan Carlos Saez received his Ph.D. in computer science in 2011 from the Complutense University of Madrid (UCM), where he also obtained the Extraordinary Doctorate Award. He is now an Associate Professor in the Department of Computer Architecture at UCM. Since 2013, he has served as the UCM Campus Representative of USENIX, the Advanced Computing Systems international association. In the last few years, he has been teaching different courses related to Operating Systems and Computer Architecture. His research interests include energy-aware computing and improving the interaction between the system software and hardware for emerging architectures. His recent research activities focus on OS scheduling on heterogeneous multicore processors, exploring new techniques to deliver better performance per watt, and quality of service on these systems.

Place: Seminar room 25, Ada Byron building
Time: 11:15, April 27, 2017
Speaker: Toby Collins
Abstract:
An important yet unsolved problem in computer vision and Augmented Reality (AR) is to register the 3D shape of deforming objects with live 2D videos. This has important applications in AR, Computer-Assisted Intervention (CAI), Computer Graphics and 3D scene understanding. At the CNRS Endoscopic and Computer Vision (EnCoV) lab I worked on this topic with Prof. Adrien Bartoli to develop robust, real-time systems. In this talk I will discuss two state-of-the-art systems. The first is Laparaug, a system to improve laparoscopic surgical intervention in gynecological surgery. It is designed to help the surgeon locate hidden sub-surface structures, including vessels and tumours. The structures are first segmented in a pre-operative MR or CT scan, and then augmented onto the laparoscope's live video in real time to guide the surgeon. The second system is a state-of-the-art approach to densely track generic deforming objects in 2D videos, with both medical and non-medical applications. This was the first dense approach to be demonstrated live, at the International Symposium on Mixed and Augmented Reality (ISMAR) 2015 and the European Conference on Computer Vision (ECCV) 2016. I will also discuss the main open objectives and future trends in the field.
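As a toy illustration of the underlying registration problem (it bears no relation to the actual systems above), the sketch below fits a smooth 2-D warp between a template point set and its observed, deformed positions using regularized radial-basis-function interpolation; the dense, real-time systems discussed in the talk solve a much harder version of this from live video.

```python
# Toy deformable registration: fit a smooth 2-D warp (RBF interpolation with
# ridge regularization) that maps template points onto observed, deformed ones.
import numpy as np

def fit_rbf_warp(src, dst, sigma=0.5, reg=1e-3):
    """Return a function that smoothly warps 2-D points, fitted on src -> dst."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))                 # RBF features
    # Ridge regression on the displacements keeps the warp smooth.
    weights = np.linalg.solve(phi + reg * np.eye(len(src)), dst - src)

    def warp(pts):
        d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return pts + np.exp(-d2 / (2 * sigma ** 2)) @ weights
    return warp

if __name__ == "__main__":
    # Template grid and a synthetic "observed" deformation (a gentle bend).
    gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    template = np.column_stack([gx.ravel(), gy.ravel()])
    observed = template + np.column_stack(
        [0.1 * np.sin(np.pi * template[:, 1]), np.zeros(len(template))])

    warp = fit_rbf_warp(template, observed)
    err = np.linalg.norm(warp(template) - observed, axis=1).mean()
    print(f"mean registration error: {err:.4f}")
```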