Maplesoft Blog

The Maplesoft blog contains posts coming from the heart of Maplesoft. Find out what is coming next in the world of Maple, and get the best tips and tricks from the Maple experts.

My son Eric began high school this year (grade 9) and a marvelous thing happened. In my previous posts, I lamented that I was generally unable to spark in him an interest in math, but something changed this year. The first sign was his first math test, given within the first two weeks of the new year. It was an assessment of sorts to see who knows what, and he scored 90%. Although it was a review of basic arithmetic with complicated fractions, order of operations, and such, this was the first time he had ever ranked within the top few of his class in math. Fast forward a few days. He came up to me with a large grin and said “Dad, you’re in my math textbook!” Actually it wasn’t me, but there was an indirect reference to Maple in one of the later chapters of the book that he was perusing out of curiosity (another good sign). “This is your stuff isn’t it?” With tears welling up inside, I proudly answered “yes.”

A favorite diversion of mine (and of many around the Maplesoft office) is xkcd. Its author, Randall Munroe, bills it as “a webcomic of romance, sarcasm, math, and language.” Since 2005, he’s been entertaining many self-proclaimed geeks with his unique and slightly skewed jokes on technology, computer science, mathematics, and relationships.

I really like the post in which a substitute teacher – hm, Mr. Munroe…

In my previous posts I have discussed various difficulties encountered when writing parallel algorithms. At the end of the last post I concluded that the best way to solve some of these problems is to introduce a higher level programming model. This blog post will discuss the Task Programming Model, the high level parallel programming model introduced in Maple 13.
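In Maple's Task model, you break work into independent tasks and supply a continuation that combines their results; the engine schedules the tasks over the available cores. As a rough conceptual analogue (a hypothetical Python sketch using a thread pool, not Maple's actual API), here is a sum split into tasks whose partial results are combined at the end:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, workers=4):
    # Break the work into one small, independent task per chunk.
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = list(ex.map(sum, chunks))  # each chunk runs as its own task
    # The final sum plays the role of the continuation that
    # combines the tasks' results.
    return sum(partials)

print(chunked_sum(list(range(1000))))  # 499500
```

The appeal of this style is that the programmer only describes the tasks and how to combine them; the scheduler decides where and when each task runs.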

Unless you’ve spent the past five years on an isolated island in the middle of the Pacific, you’ll have heard of Facebook and Twitter and LinkedIn and MySpace and Flickr. Social media sites: whether you love them, hate them, or just don’t get them, they’re going to be here for a while. If you’re like many of us, you may have a few accounts on these sites, whether you’re a power user or occasional dabbler. Social media allow us to re-connect with old friends and colleagues, share our thoughts – and photos, advertise, network... and generally waste time. :)

The evolution of written language started in earnest in 3500 BC with Cuneiform, spurring a step-change in the volume of information that could be recorded and transmitted over large distances.

This evolved into a wide spectrum of other methods of information transmission. The first transatlantic telegraph cables, for example, were laid in the mid-to-late nineteenth century by information pioneers – industrialists who saw the vast benefit in increasing the rate of information exchange by many orders of magnitude. This led to a Cambrian explosion in the sheer volume of information transmitted internationally, increasing trade and commerce to hitherto unseen levels.

In my previous posts I discussed the basic difference between parallel programming and single threaded programming. I also showed how controlling access to shared variables can be used to solve some of those problems. For this post, I am going to discuss more difficulties of writing good parallel algorithms.

Here are some definitions used in this post:

  • scale: the ability of a program to get faster as more cores are available
  • load balancing: how effectively work is distributed over the available cores
  • coarse grained parallelism: parallelizing routines at a high level
  • fine grained parallelism: parallelizing routines at a low level

Consider the following example
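The example itself is not part of this excerpt, but the coarse/fine distinction above can be sketched in Python (a hypothetical illustration, where `process_item` stands in for real per-item work):

```python
from concurrent.futures import ThreadPoolExecutor

def process_item(x):
    return x * x  # stand-in for real per-item work

items = list(range(8))

# Coarse grained: one task per large piece of work (here, per half).
# Low scheduling overhead, but load may balance poorly if one half
# happens to be more expensive than the other.
with ThreadPoolExecutor(max_workers=2) as ex:
    halves = [items[:4], items[4:]]
    parts = ex.map(lambda h: [process_item(x) for x in h], halves)
    coarse = [y for part in parts for y in part]

# Fine grained: one task per item. Better load balancing, but more
# scheduling overhead per unit of useful work.
with ThreadPoolExecutor(max_workers=2) as ex:
    fine = list(ex.map(process_item, items))

print(coarse == fine)  # same answer, different granularity
```

Whether a program scales often comes down to picking a granularity coarse enough to keep overhead low, yet fine enough that no core sits idle.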

In the previous post, I described why parallel programming is hard. Now I am going to start describing techniques for writing parallel code that works correctly.

First some definitions.

  • thread safe: code that works correctly even when called in parallel.
  • critical section: an area of code that will not work correctly if run in parallel.
  • shared: a resource that can be accessed by more than one thread.
  • mutex: a programming tool that controls access to a section of code, allowing only one thread to execute it at a time.
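For instance, updating a shared counter (a read, an add, a write) is a critical section; a mutex makes it thread safe. A minimal Python sketch of these terms in action:

```python
import threading

counter = 0                # a shared resource
lock = threading.Lock()    # a mutex guarding the critical section

def increment(times):
    global counter
    for _ in range(times):
        with lock:         # only one thread may hold the mutex at a time
            counter += 1   # critical section: a read-modify-write

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment was applied exactly once
```

Without the lock, two threads could both read the same old value and one increment would be lost; with it, the result is always exactly 40000.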

Green is definitely the color of the 21st century. Recently, I attended the annual conference of the Society of Instrumentation and Control Engineers. The keynote was delivered by Dr. Tariq Samad of Honeywell, President of the IEEE Control Systems Society. The talk covered various dimensions of advanced control – past, present, and future – and in particular Dr. Samad summarized some fascinating work being done on advanced control in the natural resources industry. During his very interesting and engaging talk, my generally conservative brain went into green mode.

Dr. Samad gave a couple of examples of massive engineering undertakings that deployed highly sophisticated control strategies at unprecedented levels of innovation and complexity. The Olympic Dam mining operation in Australia is the largest PC-based deployment of digital control techniques in history, with over 500,000 I/O points. There are major applications of model-predictive control (control strategies in which the controller has inherent knowledge of the plant dynamics) in traditional coal power plants that will immediately reduce the harm from these plants and set the stage for the introduction of alternative power generation.

In my previous post, I tried to convince you that going parallel is necessary for high performance applications. In this post, I am going to show what makes parallel programming different, and why that makes it harder.

Here are some definitions used in this post:

  • process: the operating system level representation of a running executable. Each process has memory that is protected from other processes.
  • thread: within a single process each thread represents an independent, concurrent path of execution. Threads within a process share memory.
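A small Python sketch of the second point: threads started within one process all see the same objects, so they can communicate through shared memory directly (a hypothetical example):

```python
import threading

results = []  # one list object, visible to every thread in this process

def worker(n):
    results.append(n * 10)  # write straight into shared memory

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 10, 20, 30]
```

Two separate processes, by contrast, would each get their own copy of `results`, and would have to exchange data explicitly through pipes, sockets, or shared-memory segments.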

We think of a function as a series of discrete steps. Let's say a function f is composed of steps f1, f2, f3..., e.g.
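With hypothetical steps f1, f2, and f3, a single-threaded call to f always runs them in that fixed order; the trouble begins when a second thread's steps can interleave between them. A minimal sketch:

```python
def f1(x): return x + 1   # step 1
def f2(x): return x * 2   # step 2
def f3(x): return x - 3   # step 3

def f(x):
    # Single threaded, the steps always execute in this fixed order.
    return f3(f2(f1(x)))

print(f(5))  # f3(f2(f1(5))) = f3(f2(6)) = f3(12) = 9
```

If another thread ran its own steps g1, g2, g3 at the same time, the combined execution could be any interleaving of the two sequences, and any interleaving must produce a correct result.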

Selling a company is emotionally wrenching. It was even more intense for us at Maplesoft, since we had a large number of founders who had been actively involved with the company for 20+ years. The decision for founders to sell a company so that it can move to the next stage is truly massive.

We had the luxury of a number of suitors with essentially equivalent initial financial offers, but this also destined us for a long process with lots of discussion and many twists and detours along the way. I remember the saying: that which does not kill you makes you stronger ;-). This was also my second time going through the full process, so maybe I can offer some advice to novices, noting the old saying that free advice is not always worth the price.

Sometime in 1992 I was offered the title of “Applications Engineer” at Maplesoft. I was the company’s very first employee to hold this title and it was my first real job.  I was thrilled! Imagine, if you will, an impoverished student who had been living on the most pitiful of incomes for almost ten years, all of a sudden being offered a great salary and the chance to travel and meet interesting people around the world! And for the most part, all I had to do was show people how great this thing called Maple was.

Computers with multiple processors have been around for a long time, and people have been studying parallel programming techniques for just as long. However, only in the last few years have multi-core processors and parallel programming become truly mainstream. What changed?

Here are some definitions for terms used in this post:

  • core: the part of a processor responsible for executing a single series of instructions at a time.
  • processor: the physical chip that plugs into a motherboard. A computer can have multiple processors, and each processor can have multiple cores.
  • process: a running instance of a program. A process's memory is usually protected from access by other processes.
  • thread: a running instance of a process's code. A single process can have multiple threads, and multiple threads can be executing at the same time on multiple cores.
  • parallel: the ability to utilize more than one processor at a time to solve problems more quickly, usually by being multi-threaded.
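These terms are easy to observe from a running program. A small Python sketch (the exact numbers vary by machine):

```python
import os
import threading

print(os.cpu_count())            # logical cores visible to the OS
print(os.getpid())               # the id of this running process
print(threading.active_count())  # live threads in this process (>= 1)
```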

For years, processor designers were able to increase the performance of processors by increasing their clock speeds. However, a few years ago they ran into a few serious problems. RAM access speeds were not able to keep up with the increased speed of processors, causing processors to waste clock cycles waiting for data. The speed at which electrons can flow through wires is limited, leading to delays within the chip itself. Finally, increasing a processor's clock speed also increases its power requirements, and increased power requirements lead to the processor generating more heat (which is why overclockers come up with such ridiculous cooling solutions). All of these issues meant that it was getting harder and harder to continue to increase clock speeds. The designers realized that instead of increasing the core's clock speed, they could keep the clock speed fairly constant but put more cores on the chip. Thus was born the multi-core revolution.

My name is Darin Ohashi and I am a senior kernel developer at Maplesoft. For the last few years I have been focused on developing tools to enable parallel programming in Maple. My background is in Mathematics and Computer Science, with a focus on algorithm and data structure design and analysis. Much of my experience with parallel programming has been acquired while working at Maplesoft, and it has been a very interesting ride.

In Maple 13 we added the Task Programming Model, a high level parallel programming system. With the addition of this feature, and a few significant kernel optimizations, useful parallel programs can now be written in Maple. Although there are still limitations and lots more work to be done on our side, adventurous users may want to try writing parallel code for themselves.

To encourage those users, and to help make information about parallel programming more available, I have decided to write a series of blog posts here at Maple Primes. My hope is that I can help explain parallel programming in general terms, with a focus on the tools available in Maple 13. Along the way I may post links to sites, articles and blogs that discuss parallel programming issues, as well as related topics, such as GPU programming (CUDA, OpenCL, etc.).

In my next post, the first real one, I am going to explain why parallel programming has suddenly become such an important topic.

It’s been nearly ten years since I first walked onto the University of Waterloo campus as a freshly minted undergraduate, bright-eyed and bushy-tailed and eager to learn all about electrical engineering. I guess it’s hard to believe the speed with which time passes. It’s actually a bit astonishing how much I can still remember about orientation, or “frosh” week, like 4 a.m. fire drills, a very messy obstacle course, sitting with 800 other young engineering students in a lecture hall, and above all, meeting new friends.

Recently, we were asked by a designer of thrill rides if we could help them define a design tool that would allow them to push the envelope in rider experience, while considering engineering constraints and, of course, rider safety.
