The Photography of René Burri

René Burri was a Swiss photographer known for his photographs of cultural events. He proposed to photograph the inauguration of Le Corbusier's chapel at Ronchamp for the Zurich-based magazine Weltwoche (1).

René Burri. Inauguration of the Chapel of Notre Dame du Haut, built by Le Corbusier, Ronchamp, France 1955. Photograph, gelatin silver print on paper. Image: 346 x 524 mm

(1) Daniel Maudlin and Marcel Vellinga, Consuming Architecture: On the Occupation, Appropriation and Interpretation of Buildings (Routledge, 2014).

When the House is the Art

The City of Detroit and its real estate landscape have proven to be a unique environment for 'Architecture as Art' and 'Artists as Developers'. This counters an argument Lance Posey made last year in the Huffington Post that "the concept of architecture as a form of art is not only misleading to the public, but also potentially damaging to society."

Cleaning Twitter JSON data in Processing

I won't go into the method of collecting tweets using Python; instead, I'll direct you to this article, which uses the Tweet Search script to mine for tweets and save them to a JSON file. This article launches right into the process of opening the saved JSON file.

import json

tweetLocations = []
with open('../slippy_map_test/festival_2017-09-25.json') as f:
    for row in f:
        data = json.loads(row)
        geo = data['geo']

The first line, import json, loads Python's standard library for handling JSON. The next line opens the JSON file (in this case, Twitter data with the festival hashtag) and assigns it to a variable, here called f (for file). Then, loop through each row of JSON objects in the file. The loads() method parses each row of tweets (much like JSON.parse() in JavaScript) into a Python data structure and assigns it to a variable called data. Each row has a key:value pair (in this case, geo) that will be used to render the markers to the map.

If quite a few rows have null values, it's best to filter them out before appending to the tweetLocations list. In this example, the coordinates had a significant number of empty values. Print geo (similar to console.log in JavaScript) to verify in the Processing console that you've cleaned the correct data.

# Inside the row loop: keep only tweets that have coordinates
if geo is not None:
    tweetLocations.append(geo)
    print geo
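
Putting both pieces together, the whole cleaning pass looks like this, with the filter nested inside the row loop:

import json

tweetLocations = []
with open('../slippy_map_test/festival_2017-09-25.json') as f:
    for row in f:
        data = json.loads(row)
        geo = data['geo']
        if geo is not None:
            tweetLocations.append(geo)
            print geo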

Whose [Digital] Public Space is this?

Recently, I had the pleasure of hearing performance and new media artist Amir Baradaran of Columbia's Computer Graphics and User Interfaces Lab talk about one of his personal projects, Frenchising Mona Lisa. In 2011, Baradaran installed a 52-second performance streaming live over Leonardo da Vinci's Mona Lisa via an Augmented Reality (AR) smartphone application, in which Mona Lisa unfurls her hair and wraps a French flag around herself in the form of a hijab. Baradaran brought to light a contradiction in French society, which seems to tolerate the scarf around Mona Lisa's head (or, say, those made by Hermès and worn by women along the Rue Saint-Honoré) while banning the hijab in public schools.

This project at the Louvre caught my attention because it was less about gaming, less an educational tool, and more a piece of social commentary. Even though the project is six years old, a couple of issues remain as pertinent as ever: the use of VR/AR as a viable medium for artists, and the ownership of the digital layer over physical locations.

Recently, Snapchat launched ART, a new Lens inside the multimedia platform that places digital artworks and sculptures into geo-tagged physical spaces in the public realm. Its first partnership, with Jeff Koons, places his artworks in various cities around the world, including the Champ de Mars, Paris; Hyde Park, London; and Copacabana, Rio de Janeiro, Brazil. A day later, artist Sebastian Errazuriz used augmented reality to "vandalize" Koons' Balloon Dog by placing a 3D digital recreation on top of a photo of the same geo-tagged location.

Errazuriz is asking some interesting questions about how our digital public space should, or could, be managed:

  • Should AR experiences be governed by rules similar to those for renting out physical spaces (public or private)?
  • Should corporations be allowed to place whatever content they choose over our digital public space?
  • Should they be able to do so for free while bombarding us with advertising?
  • Should we be able to choose or approve what can and cannot be geo-tagged to our own digital public space?

Last summer, Pokémon Go, a location-based augmented reality mobile game, took the country by storm and had a major impact on the public realm. For now, let's set aside the app's ability to track your location, access your phone's camera, and collect your personal information. Urban planners rejoiced at the millions of people who flocked to streets, shopping malls, and landmarks looking for monsters and PokéStops, breathing life into underutilized public spaces and proving that people will walk, given the right incentives.

Within weeks, sensitive locations like Arlington National Cemetery and the Holocaust Memorial Museum asked players to avoid "hunting for monsters" on their sites and issued statements via social media.

Niantic, the company responsible for Pokémon Go, removed the PokéStops and gyms from the Holocaust Memorial Museum in Washington, DC at the museum's request, even though it had no legal obligation to do so, and set up a process for removing them from other unwanted locations.

Pokémon Go request form for removing PokéStops

This brings us back to Errazuriz's question about our ability, as public or private property owners, managers, and users of these spaces, to choose or approve how our digital public space is used, and about the current protocol for displaying art (or whatever these monsters are categorized as). Can similar processes be borrowed from urban planners and policy makers to help inform future digital public space management?

Pokémon Go | PokéStop near the North Pool at the 9/11 Memorial (Source: New York Daily News)

Currently, the protocol for displaying art in the public realm requires consultation with the owner, manager, or government agency responsible for the space. For example, projects in privately owned buildings (such as museums, opera houses, shopping malls) must obtain permission from the building owner, while projects on the street must obtain a street activities permit from the police department.

Visualizing Data {Earthquakes}

Using USGS data, I visualize the magnitude and location of earthquakes that occurred on a particular day. Disclaimer: this data includes ‘quarry blasts’ and ‘explosions’, which can easily be filtered out.

All Earthquakes from one day | 09.27.2017

The first steps in this process are: create a new Processing sketch; download and clean the data; then import the cleaned data into the sketch. Pretty straightforward.

CREATE A NEW PROCESSING SKETCH

The main sketch starts with two functions: setup and draw. The setup() block runs once and is used to initialize the sketch, for example by setting the screen size (size() must be the first function call inside the setup block) or applying a background color. The draw() block runs repeatedly and is used for animation. Save the sketch as earthquake_viz.

def setup():
    size(1200, 600)    # must come first: set the sketch window size

def draw():
    background(44, 62, 80)    # repaint the dark blue background each frame

DOWNLOAD AND CLEAN THE DATA

Download “All Earthquakes” from the “Past Day” feed on the USGS Earthquake Hazards Program website. It will be in csv format. Create a data folder inside the earthquake_viz sketch and save the csv as earthquakes.csv. Create a new Google Sheet and open the earthquakes.csv file. Freeze the header row and delete any rows that have empty cells in the 'mag' (earthquake magnitude) column.

IMPORT CLEANED DATA INTO SKETCH

The sketch uses Python's core library for handling csv files, so include the import statement (import csv) at the very top of the sketch file. Then add the following with block to open the csv file.

import csv  # at the very top of the sketch

with open("earthquakes.csv") as f:
    reader = csv.reader(f)
    header = reader.next()  # skip the header row

    print header
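
To finish the visualization, the rows need to be loaded once and then drawn every frame. Here is one way to sketch that out (a sketch, not the only way): load the cleaned csv in setup(), convert the strings to floats, and let draw() place one circle per quake, sized by magnitude. The column indices assume the standard USGS column order (time, latitude, longitude, depth, mag), so check your header row and adjust if needed.

import csv

quakes = []

def setup():
    size(1200, 600)
    with open("earthquakes.csv") as f:
        reader = csv.reader(f)
        header = reader.next()  # skip the header row
        for row in reader:
            # Convert the string cells we need into numbers
            lat = float(row[1])
            lon = float(row[2])
            mag = float(row[4])
            quakes.append((lat, lon, mag))

def draw():
    background(44, 62, 80)
    noStroke()
    fill(231, 76, 60, 180)    # semi-transparent red markers
    for lat, lon, mag in quakes:
        # Simple equirectangular projection onto the sketch window
        x = (lon + 180) / 360.0 * width
        y = (90 - lat) / 180.0 * height
        ellipse(x, y, mag * 4, mag * 4)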

Google Knowledge Graph Search

Recently, I built a quick search widget using the Google Knowledge Graph Search Widget. The widget is a JavaScript module that adds topic search (People, Movies, Places, etc.) to your input fields; I've limited mine to People. When a user starts typing, the widget finds relevant matches and renders them to the DOM.

Artist search widget (animated demo)

In this project, I called KGSearchWidget(), passing in a configuration object that limits the language, the entity type (here I chose Person), and an event handler for selecting an item. In the config variable, I limit the number of results in the dropdown, the language searched, the entity type I want returned, and the maximum number of characters in the description of each entity.

var config = {
  'limit': 10,
  'languages': ['en'],
  'types': ['Person'],
  'maxDescChars': 100,
};

// Handle the user selecting an item from the dropdown:
// stash the entity's fields in artObject for rendering later
config['selectHandler'] = function(e) {
  console.log(e.row);

  artObject = {
    name: e.row.name,
    desc: e.row.description,
    content: e.row.json.detailedDescription.articleBody,
    url: e.row.json.detailedDescription.url,
    image: e.row.json.image.contentUrl
  };
}

KGSearchWidget(KG_API, document.getElementById("myInput"), config);
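
From there, the selectHandler can hand artObject to a small render function. Here's a minimal sketch of that step; the artResult container id is my own placeholder, so match it to whatever element your page actually uses.

// Render the selected entity into the page
// (assumes a <div id="artResult"></div> exists in the HTML; the id is a placeholder)
function renderArtObject(obj) {
  var container = document.getElementById('artResult');
  container.innerHTML = '';

  var img = document.createElement('img');
  img.src = obj.image;
  img.alt = obj.name;

  var title = document.createElement('h2');
  title.textContent = obj.name + (obj.desc ? ', ' + obj.desc : '');

  var body = document.createElement('p');
  body.textContent = obj.content;

  var link = document.createElement('a');
  link.href = obj.url;
  link.textContent = 'Read more';

  container.appendChild(img);
  container.appendChild(title);
  container.appendChild(body);
  container.appendChild(link);
}

Then call renderArtObject(artObject) at the end of the selectHandler.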

My First Twitter Bot

I've been working on this Twitter bot on and off for the past several months and decided to write a three-part series about the process: creating the Twitter app, setting up the bot to post images, and hosting the app in the cloud. The bot uses the Rijksmuseum API to tweet out random images from the Amsterdam museum's collection (the museum made its public domain collection available in order to extend its reach beyond its own website). It's built with Node.js (an open-source JavaScript runtime) and the Twit node package (a Twitter API client for Node).

Rijksmuseum Twitter bot (screenshot)

CREATE A TWITTER APP

Before writing any code, create a Twitter app and get the API keys. Open a new tab in Sublime Text (or your IDE of choice) to hold the API keys and name it config.js. Replace the x's with the matching key values to authenticate:

// config.js: keep all keys out of the main server code
var config = {
    api_key:  'xxxxxx',             // Rijksmuseum API key (see below)
    consumer_key:  'xxxxxx',        // Twitter app credentials
    consumer_secret:  'xxxxxx',
    access_token:  'xx-xxxx',
    access_token_secret:  'xxxxxx'
}

module.exports = config;

In order for the bot to make calls to the API, first install the Twit node package and include it in the server.js file with require(). Pull in the Twitter API keys stored in the config file (the Rijksmuseum API key is stored there as well). Make a Twit object that connects to the API, and call the post() function to post an actual tweet.

// Use the Twit node package
var Twit = require('twit');

// Pull in all Twitter & Rijksmuseum authorization info
var config = require('./config');

// Make a Twit object for connection to the API
var T = new Twit(config);

// Test the post function and log any errors
T.post('statuses/update', { status: 'i am a bot, i am a bot' }, function(err, data, response) {
  if (err) {
    console.log(err);
  } else {
    console.log(data);
  }
});

In Terminal, cd into the project folder and run node server.js to send the first tweet. Check your Twitter feed for new activity. The next article will explain how to set the bot up to post images from the collection. The final article in this three-part series on Twitter bots will explain how to host the bot on Heroku and store the images for posting on AWS S3 (Simple Storage Service).

Scraping Google Street View Images (the brute force way)

Before I learned how to scrape data from websites (I happen to use the BeautifulSoup package), I learned about this brute-force method of using a Python script to download images from Google Street View. I want to grab all the Google Street View images for a particular search term. I have a great interest in art and culture as social infrastructure, so I'm going to query "museum"; but I could just as easily look up "gallery", "theater", or "cinema". You get the idea. The results shown are: MoMA, the Guggenheim, the Met, the New York Historical Society, the Children's Museum, and the Bronx Museum.

Manually "scrape" Google Maps by typing in the search term ("museum") and selecting all the results on the first page. Copy the selected results and paste into a blank Google Sheet. The only information relevant to this exercise is the latitude and longitude. Sort the results (Data > Sort Sheet by Column A-Z) and delete all the rows without an address.

At this point, all the data is in one column. Split the text into separate columns and clean off the non-address portions: highlight the column, split the text (Data > Split text to columns...), and change the dropdown option from comma to space, since that's the delimiter we'll be splitting on. Then highlight the non-address columns and delete them, shifting cells to the left.

The address parts now need to be concatenated, along with the city and state; in this case, New York, New York. To combine the contents of several cells in Google Sheets (or MS Excel), paste the following formula into the first empty cell adjacent to your data. Include as many cells as you have filled with data and add a spacer (" ") between each.

=CONCATENATE(A1," ",B1," ",C1," ",D1, " New York, New York")

Copy the formula down: select the cell that has the formula, place the cursor over the lower-right corner until it turns into a plus sign (+), and drag the fill handle down. Great! Now we have addresses that can be turned into latitude and longitude values using a batch geocoder. Copy the addresses and paste them into the Addresses field of the geocoder. Make sure:

  • Addresses are in: United States
  • Separate text output with: Tabs
  • Include these columns in text output: Deselect all checkboxes

Now download the results and save in csv format. Open in Google Sheets to make sure it looks correct. In the next article, I'll explain how to get a Google Street View API key so that we can batch download the images using a Processing script.
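
As a preview of that step, here's a rough sketch of the download loop against the Street View Static API, assuming you already have an API key. The file name museums.csv, the output names, and the column indices for latitude and longitude are my own placeholders; match them to your geocoder output (which is tab-separated, per the settings above).

import csv
import urllib

API_KEY = "YOUR_API_KEY"
BASE = "https://maps.googleapis.com/maps/api/streetview"

with open("museums.csv") as f:
    reader = csv.reader(f, delimiter="\t")
    for i, row in enumerate(reader):
        lat, lon = row[1], row[2]   # adjust indices to your geocoder output
        url = "%s?size=640x640&location=%s,%s&key=%s" % (BASE, lat, lon, API_KEY)
        # Fetch the image for this location and save it locally
        urllib.urlretrieve(url, "streetview_%03d.jpg" % i)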

How to Make a Simple Database Server

A couple of months ago, I started writing an HTTP server using Node.js but had to stop because, life. I've written a few of these before as the foundation for larger Node projects, using Express for routing and MongoDB for data storage, but I'd never used local data storage. This is the first of three posts. For this first part of the program, I will create the database server using only what's available to me in Node.js. Before I jump into the code, a quick outline sets me up for a smooth(ish) execution:

  1. Create an empty Node.js project
  2. Install any dependencies, e.g. request
  3. Create the server and have it accessible from http://localhost:4000/
  4. Give it two routes: http://localhost:4000/set?somekey=somevalue and http://localhost:4000/get?key=somekey
  5. Store the key and value in local memory on the set
  6. Retrieve the key on the get

CREATING THE EMPTY NODE PROJECT

Create a new directory and cd into it.

mkdir simple-database
cd simple-database
npm init

This creates the package.json file that stores the dependencies. Then, create the main file that will contain the server code:

touch index.js

INSTALLING DEPENDENCIES

npm install request --save

Check the package.json file:

{
  "name": "simple-server",
  "version": "1.0.0",
  "description": "simple database server",
  "main": "index.js",
  "scripts": {
  },
  "author": "your name",
  "license": "ISC",
  "dependencies": {
    "request": "^2.79.0"
  }
}

Installing this dependency adds the node_modules folder to the root directory. The request package lets you make HTTP requests. The --save option automatically adds the package to the package.json file and saves that version of it to the project in the node_modules directory.

WRITING THE SERVER CODE

The first step in creating an HTTP server is to require the http module. The http module is included when Node is installed, so no additional steps are needed to import it. The server is then created with the http module's createServer() method, which takes a callback function as a parameter. Every time the server receives a new request, the callback function is executed. Once the code below is running, check the terminal and the browser window: each should say 'This works.' If you refresh three times, you should see 'This works.' logged three times in the terminal.

The callback function takes two parameters, a request and a response. The request object contains information such as the URL, while the response object is used to return the headers and content to the user making the request.

The callback function begins by calling the res.writeHead() method, which sends an HTTP status code (in this case 200, an indication of success) and response headers to the user making the request. If you don't specify headers, Node will send them for you. Next, the callback calls the res.end() method, which tells the server that the response headers have been sent and the request has been fulfilled. It can be called with no parameters, but in this case I've included the message 'This works.' as confirmation that the request was fulfilled.

The listen() method activates the server on port 4000. 

var http = require('http');

http.createServer(function(req, res){
    console.log('This works.');
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end('This works.');
}).listen(4000);
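
The next posts will flesh out the set and get routes, but here's a minimal sketch of where the outline is headed, using Node's built-in url module to parse the query string. The db object and the response messages are my own placeholders:

var http = require('http');
var url = require('url');

// Local in-memory key/value store (outline steps 5 and 6)
var db = {};

http.createServer(function(req, res) {
    var parsed = url.parse(req.url, true); // true = parse the query string
    res.writeHead(200, {'Content-Type': 'text/html'});

    if (parsed.pathname === '/set') {
        // /set?somekey=somevalue : store each query pair in memory
        for (var key in parsed.query) {
            db[key] = parsed.query[key];
        }
        res.end('Saved.');
    } else if (parsed.pathname === '/get') {
        // /get?key=somekey : look the key up and return its value
        var value = db[parsed.query.key];
        res.end(value !== undefined ? value : 'Key not found.');
    } else {
        res.end('This works.');
    }
}).listen(4000);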