Cadmium Ongoing Story, Episode 9

Written: January 2014 for Cadmium
(a short-run student zine published by the OCAD U Student Union)


Just as she started walking, she remembered she was going to hide a pecan under the tree. Hurriedly, she headed back inside.

I might as well get her a piece of the pie instead of just a pecan, she reasoned. So she opened the oven, took out the pecan pie she had baked earlier, cut out a piece, and went back out.

Halfway to her neighbour’s, she stopped by the white oak tree. She took a quick glance to make sure the squirrel was not around, then she stooped down and hid the pecan.

“Pecan for you!” 

Then it dawned on her: Funny, why do I like squirrels now? I used to hate them so much…

“Hello, anybody home?” she shouted as she knocked on her neighbour’s door.

There was no answer. She pushed the handle and found it unlocked…


“Where do I go to incorporate a new country?” the Writer asked.

“You mean a new company? That’s provincial jurisdiction. You’re in the wrong place.”

“No, a new country. Where do I go?”

“What? Secession? Don’t even think about that! No one has ever managed to pull that off!”

The Writer was not impressed. Astonished at the lack of customer service he was getting at the Town Hall, he could do nothing but blink. Suddenly, he felt a strong blow to his head and passed out…


Original page:

Written: September–November, 2012 for Wikipedia
(INCD 6B02 coursework at OCAD)

DesignAge was a cross-disciplinary[1] action research programme within the Royal College of Art in the UK, founded in 1991 in partnership with the Helen Hamlyn Foundation to “explore the implications for design of ageing populations”[2] in the developed world. It was directed by Roger Coleman until 1999, when it was merged into the newly created Helen Hamlyn Research Centre.[3][4][5] The programme received the Queen’s Anniversary Prize for Higher and Further Education in 1994 in the category of “the Arts”.[1]


By the early 1990s, it was recognized that older adults, in particular adults over 50, were becoming an increasingly significant portion of the population, while improvements in nutrition and medicine were enabling these older adults to remain active. This demographic shift was thought to be permanent. Yet the design profession largely ignored the fact that the younger population represented a shrinking market and the older population a growing one. In response to the design community’s lack of understanding of these issues and their implications, DesignAge was founded in 1991 to investigate the needs of the older population, to interpret the results of that research in a way relevant to designers and industry, and to develop new methodologies in design and design education in response to this demographic shift.[2][6]

DesignAge argued that older adults were “rendered disabled” by public spaces and transportation systems that had not been designed for them, and that design therefore had an influential role to play in shaping the future: if designers, manufacturers, and retailers could shift their attitudes towards aging and collaborate to create age-friendly products and services, improvements in the lives of older adults, as well as in the job market and national economy, could be realized. By pointing out that designing for the aging population was designing for one’s own aging, effectively reframing aging as an issue of self-interest, DesignAge was able to engage younger designers to design for older people.[7][8]

One of the ways DesignAge engaged design students was the DesignAge Competition, an annual competition held between 1992 and 1998 that challenged design students to design for their “future selves.”[9]

DesignAge also engaged the industry at large by approaching the Design Business Association (formed in 1986 by the Chartered Society of Designers[10]) and suggesting a “product challenge” to its member agencies; these were small-scale[11] events in which member agencies would work with older users to design products on a speculative basis for the aging population.[5]

In 1999 DesignAge became the Age & Ability Research Lab[4] of the Helen Hamlyn Research Centre.

Selected publications

DesignAge produced a number of publications, including the seminal Designing For Our Future Selves published in 1993. Other publications include the “Designing for our future selves” special issue (volume 24, issue 1)[12] of the journal Applied Ergonomics published in 1993; Once in a Lifetime: An Evaluation of Lifetime Homes in Hull, published in 1995; and Working Together, A New Approach To Design, published in 1997.[13]


Within its first three years, DesignAge reported that it had raised awareness of the issue within the design profession, in related disciplines including ergonomics, in education, and among major retailers, manufacturers, and the age lobby.[7]

In 1994, DesignAge was awarded the Queen’s Anniversary Prize for Higher and Further Education in the category of “the Arts” in recognition of its contribution to the shift in perception towards designing for older adults and for working with corporations to design products for older adults and people incapacitated by illness.[1]

Notable collaborations

Design for Ageing Network

In 1994, DesignAge established a Europe-wide research network on design and aging, the Design for Ageing Network (DAN), funded until 1997 by DG V of the European Commission (then the Directorate-General for Employment, Industrial Relations and Social Affairs[14]). The network’s goal was to “develop the necessary expertise, know-how and understanding to enable design and industry to respond to the growing population of over-50s in Europe in appropriate and life-enhancing ways” through the use of “in-depth collaboration with older people” that went “beyond simple measuring and questioning.”[15]

After DesignAge was subsumed into the Helen Hamlyn Research Centre, DAN continued to exist until early 2004 when it was superseded by the Include Network.[16]

Presence project

DesignAge also participated in an EU-funded project called Presence,[4] which ran from 1997 to 1999[17] and whose aim was “enhancing activity and presence of older people in communities.”[18]


  1. ^ abc “Previous Prize-winners”. The Royal Anniversary Trust. http://www.royalanniversarytrust.orgthe-prizes/previous-prize-winners?archive%5Bkeywords%5D=Royal+College+of+Art. Retrieved 25 September 2012.
  2. ^ ab “Breaking The Age Barrier”. DesignAge. June 1997. Retrieved 11 October 2012.
  3. ^ “DesignAge”. The Helen Hamlyn Research Centre. archive/hhrc/programmes/designage/index.html. Retrieved 25 September 2012.
  4. ^ abc “Age & Ability Research Lab – History”. Helen Hamlyn Research Centre. 290/all/1/history.aspx. Retrieved 25 September 2012.
  5. ^ ab “Raising Our Game”. Design Week. Retrieved 26 September 2012.
  6. ^ “About DesignAge”. DesignAge. web/19980717014326/http://designage.rca.ac.uk/DA/aboutDA.html. Retrieved 23 October 2012.
  7. ^ ab Coleman, Roger (1994). “Design Research for Our Future Selves”. Royal College of Art. http://researchonline.rca.ac.uk/404/1/coleman_design_research_for_our_future_selves_1994.pdf. Retrieved 23 October 2012.
  8. ^ Coleman, R.; Clarkson, J.; Dong, H.; Cassim, J. (2007). Design for Inclusivity. Ashgate Publishing Company. p. 26.
  9. ^ “The DesignAge Competition”. The Helen Hamlyn Research Centre. Retrieved 26 September 2012.
  10. ^ “Key Dates in CSD History”. Chartered Society of Designers. Retrieved 26 September 2012.
  11. ^ Cassim, Julia (April 2008). “The Challenge Workshop—a designer-friendly, cross-disciplinary knowledge transfer mechanism to promote innovative thinking in different contexts”. International DMI Education Conference Design Thinking: New Challenges for Designers, Managers and Organizations, 14–15 April 2008, ESSEC Business School, Cergy-Pontoise, France. Retrieved 26 September 2012.
  12. ^ “Applied Ergonomics, Volume 24, Issue 1, Pages 2–69 (February 1993)”. Elsevier B.V. http://sciencedirect.comsciencedirect.com/science/journal/00036870/24/1. Retrieved 23 October 2012.
  13. ^ DesignAge Publications. DesignAge. Retrieved 23 October 2012.
  14. ^ “Mandate 1991–1997”. European Commission. Retrieved 26 September 2012.
  15. ^ “What is the DAN?”. Design for Ageing Network. Retrieved 23 October 2012.
  16. ^ “The Design for Ageing Network”. The Helen Hamlyn Research Centre. Retrieved 26 September 2012.
  17. ^ The Presence Project. Retrieved 11 October 2012.
  18. ^ Presence. Presence. Retrieved 11 October 2012.

Braille, bits, and blinds

Original page:

Written: March 2013 for the Blind Reading installation project

Although window blinds are not meant to convey information, by manipulating them in various ways, information can be conveyed. This can be thought of as a “covert channel.”

One way to convey information is by distinguishing between the “open” and “closed” states of the blinds, which can be thought to represent a binary digit, or bit. By successively manipulating the blinds into perceptible states of open and closed (representing 0 and 1, for example), one can theoretically transmit any message that can be represented by bits.

Computer encodings are not the only binary-based codes: Braille, for example, can also be thought of as a binary code, normally consisting of 6 bits (called dots) in a Braille cell. The dots are read from the top-left corner: the left three dots top to bottom (1, 2, 3), then the right three dots top to bottom (4, 5, 6). Knowing this numbered sequence of dots (as opposed to just the visual pattern) is what allows one to write Braille using a stylus and slate.
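The dot numbering can be made concrete with a small sketch. This Python illustration assumes the standard literary-Braille dot assignments for the first few letters (only a handful are included here):

```python
# Braille letters as sets of raised dots, numbered 1-3 down the
# left column and 4-6 down the right. Only a few letters are
# shown for illustration.
BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def to_bits(letter):
    """Return the 6-bit pattern for a letter, dot 1 first."""
    dots = BRAILLE_DOTS[letter]
    return [1 if d in dots else 0 for d in range(1, 7)]

print(to_bits("c"))  # dots 1 and 4 raised -> [1, 0, 0, 1, 0, 0]
```

Reading the list in the numbered dot order is exactly what a slate-and-stylus writer does mentally.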

In practice, an unbroken stream of ones and zeros cannot be separated into the correct cells. “Stop bits,” as they are called, need to be present between cells for the cell boundaries to be made out. In a real computer encoding, there will be an extra layer of encoding so that stop bits (which carry meta-information) can be distinguished from the real bits that carry information. In the case of window blinds, however, we can introduce a third state of “half open” to serve as the stop bit. Our encoding is then no longer truly binary, but a ternary system.
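The ternary scheme can be sketched as follows; the state names and helper functions are purely illustrative, not part of any real signalling protocol:

```python
# Blinds signal OPEN (0), CLOSED (1), or HALF (cell separator).
OPEN, CLOSED, HALF = "open", "closed", "half"

def encode(cells):
    """Flatten 6-bit cells into one stream of blind states,
    inserting a HALF state between consecutive cells."""
    stream = []
    for i, cell in enumerate(cells):
        if i > 0:
            stream.append(HALF)
        stream.extend(CLOSED if bit else OPEN for bit in cell)
    return stream

def decode(stream):
    """Split the stream back into cells at each HALF state."""
    cells, current = [], []
    for state in stream:
        if state == HALF:
            cells.append(current)
            current = []
        else:
            current.append(1 if state == CLOSED else 0)
    cells.append(current)
    return cells

# Two example 6-bit cells; round-tripping recovers them exactly.
cells = [[1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0]]
assert decode(encode(cells)) == cells
```

Because the separator is a distinct third state, the decoder never confuses a cell boundary with a data bit.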

Of course, the manipulation of blinds is ultimately constrained by physics: we cannot pull the blinds up or down arbitrarily quickly, so there is a lower bound on how long it takes to convey one bit of information. Assuming, for example, that we need 10 seconds to pull the blinds up or down and another 5 seconds for an observer to register whether the blinds are open or closed, then with Braille as the internal encoding, a single window to transmit, and one non-bit pattern between Braille cells, it will take approximately 15 minutes to transmit just a 10-letter word.
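Working through the numbers in this estimate: the 60 bit states of a 10-letter word alone take exactly 15 minutes, and counting the separator states pushes the total closer to 17, still in the same ballpark:

```python
# Assumptions from the paragraph above: 10 s to move the blinds
# plus 5 s for the observer to register each state, 6 dots per
# Braille cell, one separator state between cells, 10 letters.
SECONDS_PER_STATE = 10 + 5
DOTS_PER_CELL = 6
LETTERS = 10

bit_states = LETTERS * DOTS_PER_CELL   # 60 data states = 900 s
separators = LETTERS - 1               # 9 separator states
total = (bit_states + separators) * SECONDS_PER_STATE

print(total, "seconds, i.e. about", round(total / 60), "minutes")
```

The data states alone come to 900 seconds; with separators the total is 1035 seconds, roughly 17 minutes for a single 10-letter word.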

Finer Points

Original page:

Written: April–May, 2012 for the “Video Subtitles” article in the Coursera wiki

(Note: This entire section is based on observation and is entirely unofficial, but parts related to the identity of stanford-bot have now been confirmed by both Coursera and Amara through their bug report channels.)

Dealing with the initial transcript

The initial set of subtitles is almost always produced by stanford-bot. This is a machine transcription program that goes through every uploaded video, transcribes it automatically using speech recognition, and then mechanically cuts the transcript into subtitles. While you are probably aware that the quality of its transcription is less than perfect, the way it converts transcripts into subtitles also has a few subtle kinks that are not initially obvious but will interfere with your subtitling work:

  1. The original transcript will have hard returns, which are not removed when the transcript is converted into the initial set of subtitles. This means that if you add or remove words, the hard returns will be at the wrong places.

  2. These machine-transcribed subtitles are always generated, even if you have already finished a set; in other words, stanford-bot will overwrite your subtitles if you “started too early”.

For the second problem, if stanford-bot overwrites your subtitles, you have basically two options: you can revert to your version, or you can start over using stanford-bot’s transcript as a new base. Both are valid options and you can choose either (depending on, say, how much work you have finished), and in particular you are free to revert. The important thing is not to take it personally.

Reformatting the initial transcript

For the first problem mentioned above, you will just need to remove the hard returns (which are always present). Or you can consider reformatting the whole transcript before you start any work.
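As an illustration of what removing the hard returns might look like, here is a hypothetical Python helper that joins the hard-wrapped text lines within each SRT cue; it assumes well-formed cues (index line, timing line, text lines, blank separator) and is not part of any official tool:

```python
def join_cue_lines(srt_text):
    """Join the text lines of each SRT cue into a single line,
    leaving the cue index and timing lines untouched."""
    cues = srt_text.strip().split("\n\n")
    out = []
    for cue in cues:
        lines = cue.split("\n")
        index, timing, text = lines[0], lines[1], lines[2:]
        out.append("\n".join([index, timing, " ".join(text)]))
    return "\n\n".join(out) + "\n"

sample = (
    "1\n00:00:01,000 --> 00:00:04,000\nThis line was\nhard-wrapped.\n"
    "\n"
    "2\n00:00:04,000 --> 00:00:06,000\nAlready one line.\n"
)
print(join_cue_lines(sample))
```

After this preprocessing, adding or removing words no longer leaves stray line breaks in the wrong places.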

There are a couple of points to consider:

  1. Reformatting the entire transcript is a huge time commitment. Depending on your transcription speed and/or accuracy, this can in fact take as long as doing the whole transcript from scratch. If you still want to do this and are using the web interface, you might consider using a script (such as this set) to do some preliminary work for you.

  2. It has also been observed that reformatting the initial transcript will void any translations that are already present. This is probably something you want to keep in mind if you want to avoid unnecessary impact on other people’s work.

If you decide to reformat, you have two options after finishing:

  • Restart Step 1 of the subtitling process if you are using the web interface.
  • Upload your revised SRT file (but see caveat below).

Impact of SRT uploads

It has been mentioned above that editing SRT files in an external editor is as good as using the web interface to edit the subtitles. However, this is not in fact the case if the video already has translated subtitles.

In short, if translations are already present, as soon as you upload your changes back,

  • you will be erroneously shown as the translator of all the translations, and
  • the revision history of the translation file will erroneously show that the real translators have contributed nothing.

This has been reported, but for some reason it was deemed not a problem. Reverting will not help. So if you don’t want these erroneous changes to take place, you might want to stick to the web interface and avoid using a text editor to edit subtitles.

Copy-editing the subtitles

Because these videos are lectures and translators are involved, there are a few things we might want to consider:

  • We want the subtitles to reflect what the lecturers intend to say.
  • Colloquial English, fillers, and tangential comments can confuse some (not all) translators.
  • English word order is not universal.
  • Something that takes three screenfuls to say in English might take only one screenful to say in another language. Or the other way round.
  • Sometimes there is content in the bottom of the slide.
  • Unlike movie viewers, viewers of our subtitles are probably not interested in whether the lecturer was coughing. (stanford-bot transcribes coughs and other random sounds.)

From these observations we can make a few suggestions, some of which may contradict Universal Subtitles’ official recommendations:

  • It is probably best to correct slips of the tongue if the lecturer obviously intended to say something else.
  • It is probably best to edit out stutters, fillers (e.g., “sort of”, “like”, and “right” used as fillers), slips of the tongue that are subsequently corrected, and the like. This helps keep the subtitles concise and avoids confusing translators.
  • If the lecturer goes on a tangent to make a comment that interrupts the flow of the original sentence (e.g., “So, here is a factor.”) it is probably best to somehow mark off this tangential comment (e.g., using parentheses) to make it clear to translators that this is something that is not part of the original flow.
  • It is probably not realistic to aim for one-line subtitles. One-line subtitles can be too short for translation, especially if stutters, fillers, and slips of the tongue are not edited out.
  • It might be helpful to sometimes have places in the video with no subtitles, so that the viewer can see the bottom of the slide.
  • It is probably best to leave most random sounds untranscribed.
  • Correct punctuation helps translators tremendously.

What kind of time commitment to expect

As a rule of thumb, professionals often assume that each video minute takes 3–5 minutes to transcribe (not counting the time to sync the subtitles to the video). So a 20-minute lecture should take between an hour and 1 hour 40 minutes if you are a professional and are only doing the transcription.

Since we are probably not professionals and we probably want to finish the whole thing including syncing our subtitles to the videos, we are looking at much more than 5x the video length. It is not unreasonable to expect 15x (or more) the video time to finish a set of subtitles from scratch, i.e., about 5 hours (or more) of volunteering time for a 20-minute lecture.

Depending on the quality of the original transcript, proofreading of stanford-bot’s subtitles can take less time. It is possible to approach the “3x the number of video minutes” estimate with appropriate tools when proofreading entire videos.
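The rules of thumb above can be expressed as a quick calculation; the multipliers come directly from the estimates in this section:

```python
# Multipliers from the text: 3-5x video length for a professional
# transcribing only, roughly 15x for a volunteer doing subtitles
# from scratch (transcription plus syncing).
def estimate_minutes(video_minutes, multiplier):
    """Estimated working time in minutes for a given video length."""
    return video_minutes * multiplier

lecture = 20  # a 20-minute lecture
print(estimate_minutes(lecture, 3), "-", estimate_minutes(lecture, 5),
      "minutes for a professional, transcription only")
print(estimate_minutes(lecture, 15) / 60,
      "hours for a volunteer doing everything from scratch")
```

For the 20-minute lecture this gives 60–100 minutes of professional transcription time, and about 5 hours of volunteer time, matching the figures above.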
