Posted by Omer

07 Apr 2016 — No Comments

Posted in Uncategorized


Photo by Killscreen

For VERSIONS 2016, hosted by Killscreen and NEWINC, I gave a workshop on perception and designing for the senses in virtual reality. The response was positive and I got some emails asking me to share the material, so I wrote a short post on what it's about. Here are the slides:

After the presentation, I asked all the participants to practice designing for VR without a headset. The point is to use body intuition before touching a computer, because VR is centered on the body. I've derived some of these methods from design exercises rumored to be used in Mark Bolas' lab at USC (if anyone at the lab is reading this, please confirm!). To design what feels good in virtual space, start by shutting down the most obvious inputs.

To make that happen, you need:

  • Blindfold
  • Laser pointer
  • Sharpies
  • Large drawing pad
  • One million post-it notes.

Divide the participants into groups of three. One is the designer; the other two help out. The designer's role is to point at things and say what they want to see there; the helpers write large post-it notes and stick them to walls, chairs, etc. There are three rules:

  • The designer is blindfolded. They have only two chances to lift their blindfold; after the second, the design is final.
  • All of the designer's instructions are given while blindfolded. Lifting the blindfold is only for looking around: no talking, no pointing, and certainly no touching.
  • No talking. Only the designer can speak, and they should feel completely alone while doing so.

Photo by Killscreen

With this, some hidden features emerge:

  • The designer is capable of spatial reckoning without audiovisual cues.
  • Our proprioception and motion planning skills can do a lot of design work, on things from menus to spaces to interactions.
  • Interaction and experience come before graphics.

Photo by Killscreen

The interesting thing I find with this is how many people have good spatial reckoning skills embedded in them. They use them daily for processing instructions, planning routes to walk faster in the subway, reaching behind things, driving. These are core human skills that map trivially into VR.

Posted by Omer

30 Sep 2014 — No Comments

Posted in Uncategorized



I put this thing on Twitter and it spread too quickly for me to understand what was going on. So here it is, somewhere I can see it.

Posted by Omer

04 Apr 2013 — No Comments

Posted in ITP, Works

Here's something I did with Surya Mattu for James George's video art class at ITP. For now, it's a program that takes feeds from traffic cameras, extracts the cars, and turns them into sprites. There's some fun stuff coming up in part 2, so watch this space.

Tools: openFrameworks, OpenCV.
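The car extraction is conceptually just background subtraction. Our version runs in openFrameworks with OpenCV, but the idea fits in a few lines; here's a rough grayscale Python sketch of it, with the function name and threshold being my own assumptions rather than our actual code:

```python
def extract_sprite(background, frame, threshold=30):
    """Rough sprite extraction by background subtraction:
    keep the pixels that differ enough from an empty-road
    background frame, zero out everything else.
    Both inputs are 2D lists of grayscale values."""
    sprite = []
    for bg_row, fr_row in zip(background, frame):
        sprite.append([p if abs(p - b) > threshold else 0
                       for b, p in zip(bg_row, fr_row)])
    return sprite

# A 3x4 patch of empty road, then a frame with one bright "car" pixel:
empty = [[10] * 4 for _ in range(3)]
frame = [row[:] for row in empty]
frame[1][2] = 200
car = extract_sprite(empty, frame)  # only the pixel at (1, 2) survives
```

The real version also groups the surviving pixels into connected blobs before cutting sprites out, which is what OpenCV's contour tools are for.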

Posted by Omer

25 Mar 2013 — No Comments

Posted in Uncategorized, Works


tl;dr: I made Wikipedia:Random even more satisfying. It's at

Maybe it's just the way I select the people I hang around with, but it seems to me I know nearly no one who can say they aren't the target audience for Wikipedia:Random. Personally, I keep my access to it as close at hand as I can. For a while I even had typing in the address bar redirect there.


Wikipedia:Random is an amazing phenomenon. While it may turn out rubbish 50% of the time (unverified statistic), it also turns out pure gold, more often than I'd expect. That means, I guess, that the general importance of all the knowledge in the universe has only a partial ordering, and it's way denser in the upper echelons than you'd think. There is, however, a minute difference.

Wikipedia:Random isn't Wikipedia.

It's just not the same thing. People don't use it as a feature of Wikipedia; they use it recreationally, in a completely different context. It's a random fact generator, only better informed and better verified. RandomFax is a wrapper for Wikipedia:Random, only it has two things that make me like it better:

  • It has a big refresh button, with a pigeon on it.
  • Instead of a topic, say "Medulla oblongata", it displays the title as "What the fuck is a 'Medulla oblongata'?". I'm calling that a feature.

RandomFax - Hebrew

That's kind of the point, I guess: making Wikipedia:Random as fun as it's supposed to be. Oh, and it's indexed by Google, which means that I have a lot of weird traffic. But more on that later. RandomFax is currently available in Hebrew. Once I get a strong enough server, I'll make it available in English as well. If anyone wants to help open it in another language (depending on the language, it requires a bit of NLP, but I can help with that), hit me up and we'll make it happen.
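Under the hood there isn't much to it: Wikipedia's Special:Random page does the heavy lifting by redirecting to a random article, and the wrapper just re-titles the result. A hypothetical Python sketch of that flow (the function names are mine, not the site's actual code):

```python
import urllib.parse
import urllib.request

WIKI_RANDOM = "https://en.wikipedia.org/wiki/Special:Random"

def snarky_title(topic):
    """RandomFax-style reframing of a Wikipedia topic."""
    return f"What the fuck is a '{topic}'?"

def random_fact():
    """Follow Special:Random's redirect and return the landing
    article's title, reframed. Needs network access."""
    with urllib.request.urlopen(WIKI_RANDOM) as resp:
        slug = resp.geturl().rsplit("/", 1)[-1]
    topic = urllib.parse.unquote(slug).replace("_", " ")
    return snarky_title(topic)
```

The Hebrew version needs the extra NLP mentioned above because the "What is a ...?" template has to agree with the topic's gender and number.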


Posted by Omer

09 Mar 2013 — No Comments

Posted in Uncategorized


Last September, I boarded a plane from Tel Aviv to New York to start a big new period in my life. I was about to begin studying at NYU's Interactive Telecommunications Program, a master's program in art and technology. The engine hum turned into a roar, gravity shifted to my back. My girlfriend held me tight and we were airborne. A month later, she took a flight back to Tel Aviv to finish her own degree in Architecture.

Being two busy students/artists makes it hard enough to see each other; being seven time zones apart makes it impossible. When we're together, we try as hard as we can to distract each other. There's always too much work, until someone has had enough and sneaks something new onto the stereo. Then we dance.

Listening to music together is something as fundamental to us as touching each other. There was no way around it: I had to find a way to do that again.

When I was visiting Tel Aviv that winter, I saw how much things had changed. Our apartment was now shared, our stuff heavily permuted, and my Raspberry Pi had finally arrived in the mail.

I took this opportunity to make a device that creates that shared space between us. I connected the Pi to the T-Amp in the room, installed Raspbian, set up the proper port forwarding for communication outside the local network, and started installing some stuff.

I started by setting up shairport, an AirTunes server for her to use, following this tutorial by Eric Trouch. It worked pretty well, so I'm sharing my steps:

For the current version, you'll need mplayer, ffmpeg and youtube-dl:

sudo apt-get install ffmpeg mplayer youtube-dl

Create your script file. I call mine yt, so I typed sudo nano yt. This opens the nano text editor. Into it, paste the following piece of code:

mplayer -vo null -cookies -cookies-file /tmp/cookie.txt $(youtube-dl -g -f 34 --cookies /tmp/cookie.txt "$1")

One line. That's it. Now make it runnable:

sudo chmod a+x yt

That's what I did. And then, from an apartment in Williamsburg, I typed this to a computer in Tel Aviv:


A minute later I got a response.
It works. We can make it through a few more weeks.

Posted by Omer

05 Mar 2013 — No Comments

Posted in Works

Oscillations (click for fullscreen) is something I wrote in processing.js. It's an experiment in iterated linear interpolations of trigonometric functions.


  • The mouse controls the sampling frequency
  • The up/down keys control the frequency multiplier
  • The left/right keys control the y phase.
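For the curious, "iterated linear interpolation of trigonometric functions" boils down to repeatedly lerping a waveform against frequency-multiplied, phase-shifted copies of itself. A rough Python sketch of the idea (the parameter names and the fixed 0.5 blend factor are my guesses, not the sketch's actual values):

```python
import math

def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def iterated_wave(x, depth, mult, phase):
    """Start from sin(x), then repeatedly blend it with a
    frequency-multiplied, phase-shifted copy of itself."""
    y = math.sin(x)
    for i in range(depth):
        y = lerp(y, math.sin(mult * x + phase * i), 0.5)
    return y

# One scanline of the pattern:
samples = [iterated_wave(i * 0.1, depth=4, mult=3.0, phase=0.5)
           for i in range(100)]
```

In the sketch, the mouse and arrow keys just feed new values into the sampling step, the multiplier and the phase every frame.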

Posted by Omer

03 Mar 2013 — No Comments

Posted in ITP


This is a little thing I've been working on for James George's class. It's a sketch that evolves a 2D image sequence from a single video line. Right now it uses some form of averaging, but soon I'll write a 1D cellular automaton to make it more interesting. Here are some results.
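The 1D cellular automaton mentioned above would replace the averaging step: each new row of the image gets computed from the previous one by a local rule. Here's a sketch using a classic elementary CA (Rule 30, purely as an example; the video version would seed the first row from the camera line instead of a single cell):

```python
def ca_step(row, rule=30):
    """One generation of an elementary cellular automaton.
    Each cell looks at its left/center/right neighbors (wrapping
    at the edges) and reads its next state out of the rule byte."""
    n = len(row)
    nxt = []
    for i in range(n):
        idx = (row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]
        nxt.append((rule >> idx) & 1)
    return nxt

def evolve(seed_row, generations):
    """Stack successive generations into a 2D 'image'."""
    image = [seed_row]
    for _ in range(generations):
        image.append(ca_step(image[-1]))
    return image

seed = [0] * 41
seed[20] = 1  # a single live cell mid-line
picture = evolve(seed, 30)
```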

Here's the code:

/**
 *  Type 1,2 for different 'flame' modes
 *  Type 'c' to turn clipping on/off
 *  @author Omer Shapira
 */

import processing.video.*;

Capture video;
PImage img;
boolean clip = true;
int type = 0;

void setup() {
  size(640, 480);
  video = new Capture(this);
  video.start();
  img = createImage(640, 480, RGB);
}

void draw() {
  update();
  image(img, 0, 0);
}

void update() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    img.loadPixels();

    int ix, iy;
    for (int i = width*height-1; i >= 0; i--) {
      ix = i % width;
      iy = i / width;
      if (iy >= height-2) {
        // Seed the bottom rows with the live video line
        img.pixels[i] = video.pixels[i];
      } else {
        switch (type) {
          case 0:
            img.pixels[i] = averageColors(clip, img.pixels[i+width-1], img.pixels[i+width],
              (ix == width-1 ? img.pixels[i+width-1] : img.pixels[i+width+1]), img.pixels[i+width*2]);
            break;
          case 1:
            img.pixels[i] = averageColors(clip, img.pixels[i], img.pixels[i+width]);
            break;
        }
      }
    }
    img.updatePixels();
  }
}

// Averages each 8-bit ARGB channel of the given colors separately.
// With clip on, each channel is masked back into its own byte range.
int averageColors(boolean clip, color... colors) {
  float tempFloat = 0;
  int tempColor = 0;
  for (int i = 0; i < 4; i++) {
    int range = 255 << (8*i);
    for (color c : colors) {
      tempFloat += (c & range);
    }
    tempFloat /= (float) colors.length;
    tempColor += (!clip ? int(tempFloat) : int(tempFloat) & range);
    tempFloat = 0;
  }
  return tempColor;
}

void keyTyped() {
  if (key > '0' && key < '9') {
    type = int(key) % 2;
  } else if (key == 'c') {
    clip = !clip;
  }
}

Posted by Omer

28 Feb 2013 — No Comments

Posted in Works




I released the code I used to projection-map my Inverse Kaleidoscope (documentation coming soon). P5 Texture Map is a projection mapping addon I wrote for the project. It only uses Java and Processing (no external OpenGL libraries). Version 0.1 is now available on GitHub.


Posted by Omer

22 Feb 2013 — No Comments

Posted in ITP, Works

This is my first assignment for James George's class, Emerging Processes in Video Art. The pixel sorting program was written by me. It runs in real time.
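Pixel sorting, for anyone who hasn't met it: along each row, pick out contiguous runs of pixels (split wherever the value crosses some threshold) and sort just those runs, which produces the smeared, melting look. A toy grayscale version in Python, with the threshold and run logic being my own assumptions rather than my program's actual rules:

```python
def sort_row(row, threshold=100):
    """Sort each contiguous run of bright pixels in place,
    leaving the dark pixels that separate the runs untouched."""
    out = list(row)
    start = None
    for i, v in enumerate(out + [None]):  # None closes a trailing run
        bright = v is not None and v >= threshold
        if bright and start is None:
            start = i
        elif not bright and start is not None:
            out[start:i] = sorted(out[start:i])
            start = None
    return out

row = [30, 200, 120, 180, 10, 255, 140]
smeared = sort_row(row)  # -> [30, 120, 180, 200, 10, 140, 255]
```

Running it in real time is mostly a matter of doing this per scanline on the GPU or with tight array code; the Python above is just the shape of the algorithm.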

Posted by Omer

20 Feb 2013 — No Comments

Posted in ITP

This week, Ryan Bartley and I set out to Brad's at NYU. We set up a Node.js server on an Amazon EC2 instance, ran Spacebrew on it, and wrote a little chat program to communicate. We also had too many beers. Enjoy.