Introduction to Software Engineering

Course Introduction

Today, and for the foreseeable future, software plays an essential role in many aspects of life. The dramatic increase in software size, complexity, and scale has made the software engineer's role central: taking an idea and translating it into a well-designed plan for a software-based application.

Software architects and designers develop abstract models of problems in order to design software-based solutions that meet customers' needs. A software designer should therefore be able to quickly gain a profound understanding of new problem domains, as well as be proficient in the tools and methodologies of the software field. Current paradigms for software engineering education, aimed at training software engineers as software architects and designers, often emphasize the software domain while neglecting the art of conceiving useful abstractions of new, often discipline-oriented, problems. Moreover, a typical software engineering course often lacks practical application of the theoretical methods and processes taught in class.

This course is a first-of-its-kind introductory course to software engineering, design, and architecture. The course is based on designing and modeling a ray tracer for a virtual 3-dimensional graphical renderer, including a realization of the physics involved (light sources, rays, reflections, refractions, colors, occlusions, etc.). Students spend time understanding a new problem domain and invest thought in designing abstractions, test cases, and solutions. During the course, students integrate their basic knowledge of mathematics and physics with their fundamental understanding of algorithms, data structures, object-oriented design, and coding, while routinely applying design refactoring, as is ordinarily the case in the practice of software engineering.
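To give a flavor of the kind of physics students model in the project, here is a minimal sketch (not taken from the course materials) of a ray-sphere intersection test, the basic geometric primitive at the heart of any ray tracer; all names here are illustrative:

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __sub__(self, o):
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

    def dot(self, o):
        return self.x * o.x + self.y * o.y + self.z * o.z

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit point,
    or None if the ray misses the sphere.

    A point on the ray is origin + t * direction; substituting it into
    the sphere equation |p - center|^2 = radius^2 yields a quadratic in t.
    """
    oc = origin - center
    a = direction.dot(direction)
    b = 2.0 * oc.dot(direction)
    c = oc.dot(oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # no real roots: the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest of the two roots
    return t if t >= 0 else None  # hits behind the origin don't count
```

For example, a ray from the origin along the z-axis toward a unit sphere centered at (0, 0, 5) hits it at t = 4. A full renderer would layer shading, reflections, and refractions on top of this primitive, which is exactly where the course's abstraction and refactoring exercises come in.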

Course materials are shared under a Creative Commons license at the following link: Course Materials

Cognitive Computing

Course Introduction

As a result of clock-speed stagnation and the foreseen limitation of transistor density, traditional computing technology based on the Von Neumann architecture is facing fundamental limits. Artificial Neural Networks (ANNs) are therefore attracting increased attention. Underlying these efforts is the ultimate goal of realizing machines that could surpass the human brain in some aspects of cognitive intelligence. In that sense, brain research and ANNs bear the promise of a new computing paradigm.

One such framework is IBM's brain-inspired SyNAPSE chip. It is powered by an unprecedented 1,000,000 neurons and 256,000,000 synapses. At 5.4 billion transistors, it is the largest chip IBM has ever built, with an on-chip network of 4,096 neuro-synaptic cores, yet it consumes only 70 mW during real-time operation - orders of magnitude less energy than traditional chips. As part of a complete cognitive hardware and software ecosystem, this technology opens new computing frontiers for distributed sensor and supercomputing applications.

In this course we will study the fundamentals of neuromorphic computing, focusing on: the digital neuron model for neuro-synaptic cores; the Corelet language, a new programming paradigm that permits construction of complex cognitive algorithms and applications; algorithms and applications for networks of neuro-synaptic cores; and aspects of cognitive computing commercialization.
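To illustrate the flavor of the digital neuron models the course covers, below is a minimal sketch of a discrete-time leaky integrate-and-fire neuron. This is a generic textbook model, not the actual TrueNorth/SyNAPSE neuron equation, and the parameter values are illustrative:

```python
def lif_step(v, input_current, leak=1.0, threshold=20.0):
    """One discrete time step of a leaky integrate-and-fire neuron.

    v             -- membrane potential carried over from the last step
    input_current -- summed weighted input from incoming spikes this step
    leak          -- constant amount drained from the potential each step
    threshold     -- potential at which the neuron fires

    Returns (new_potential, spiked).
    """
    v = v + input_current - leak  # integrate input, then apply the leak
    if v >= threshold:
        return 0.0, True          # fire a spike and reset the potential
    return max(v, 0.0), False     # clamp at zero; no spike this step
```

Driving such a neuron with a constant input current of 5.0 per step (net +4.0 after the leak) makes it fire every fifth step; a digital neuro-synaptic core evaluates many such neurons in parallel, each fed by spikes routed over the on-chip network.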