A more readable Hacker News you can use today.



Note: This hack requires the Stylebot Chrome Extension (still testing with Firefox CSS extensions!) and the free and beautiful Source Sans Pro.

For those who do not revel in the 1990s nostalgia, I’ve hacked together what I think is a simpler, more readable Hacker News.

It’s a work in progress, but I’ve been using this for a couple weeks and have found it hard to go back.

Some fringe pages still need styling, and I’d like to add some basic media queries and take another shot at getting this working with a Firefox CSS extension.

Easy install here: Hacker News Base 0
PRs Happily Accepted: https://github.com/jtgi/hackerstyle/

Bonus: HN Markup for your viewing pleasure

tables&tables&tables

Enjoy!

Adding CSS animation timing function presets to Firefox Dev Tools.

About a month ago a rogue retweet from @patrickbrosset landed in my timeline:

I’d been on the lookout for a nice feature to attack as my first open source contribution and this seemed like just the right amount of fun and work to take on.

At the time, the inspector already offered a convenient way to specify your own custom cubic-bezier function: you simply open the tooltip from the inspector and adjust the handles until you get your timing function just right.
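Under the hood, cubic-bezier(x1, y1, x2, y2) is just a Bézier curve with its endpoints pinned at (0, 0) and (1, 1); time runs along the x axis and animation progress along the y axis. Here's a rough sketch of how such a timing function gets evaluated. To be clear, this is plain TypeScript for illustration only, not Gecko's or the DevTools' actual code, and the bisection solve is simply the easiest thing that works:

```typescript
// Evaluate cubic-bezier(x1, y1, x2, y2) the way CSS uses it:
// endpoints are fixed at (0, 0) and (1, 1); x is time, y is progress.
function cubicBezier(x1: number, y1: number, x2: number, y2: number) {
  // One coordinate of the cubic Bézier with P0 = 0 and P3 = 1.
  const bez = (t: number, p1: number, p2: number): number =>
    3 * (1 - t) ** 2 * t * p1 + 3 * (1 - t) * t ** 2 * p2 + t ** 3;

  return (x: number): number => {
    // Solve bez(t, x1, x2) = x for t by bisection. x is monotonic in t
    // because CSS clamps x1 and x2 to [0, 1].
    let lo = 0, hi = 1;
    for (let i = 0; i < 50; i++) {
      const mid = (lo + hi) / 2;
      if (bez(mid, x1, x2) < x) lo = mid; else hi = mid;
    }
    // Map the solved t back to progress on the y axis.
    return bez((lo + hi) / 2, y1, y2);
  };
}

// "ease-in-out" is defined by the spec as cubic-bezier(0.42, 0, 0.58, 1).
const easeInOut = cubicBezier(0.42, 0, 0.58, 1);
console.log(easeInOut(0.25)); // slow start: well under 0.25
```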

It looked like this:

The cubic-bezier widget, before

The motivation behind the patch was to provide a set of common mathematical easing functions for use with the CSS timing-function properties (animation-timing-function and transition-timing-function). After some back and forth on design and requirements, we settled on adding 31 presets under the categories ease-in, ease-out, and ease-in-out. Here’s how it turned out:


What was there was already good, but I’ve definitely found this to be a nice usability improvement. In some cases it may even drive awareness of timing functions you didn’t know existed or had forgotten about.

As someone who wrote their first lines of code on Firefox, Firebug, and the WebDev toolbar back in the day, I find it fitting that this is my first contribution, even if a small one.

Thanks so much to Patrick and the Firefox Dev Tools team for all the help. I hope developers out there find this to be a useful addition.

Halos for Oculus Rift.


Update: Now available on Android + iOS with Facebook leaderboard integration and accelerometer controls.

Halos for Oculus Rift DK2 – Download: Win64

The Plot

You are a rogue asteroid making your way to Earth to inflict mass destruction on the planet you once loved, but the clever folks at NASA have set up an array of sparkly rings with electric particles. They have carefully calibrated the halos (cheerios?) to completely disable your movement after three collisions.

Your job is to get to Earth or reach a float overflow, whichever comes first. Don’t let the yellow planet down.

Okay, except I didn’t get time to make an asteroid, or Earth, or integrate a story line in any meaningful way, but now you can imagine it all while you plummet towards nothingness.

TL;DR

Halos is an infinite faller through rings. Highest score wins. It will probably make you nauseous.

I Can Make Video Games?

This is the first game I ever made. It’s written in the lovely Unity Game Engine and I have to say, while Halos is about as basic as it gets, it was easier than I thought to get reasonable graphics and gameplay in 3 days of dev.

Some Memorable Moments

  • When my character didn’t experience any drag force and just kept accelerating until float overflow. Nice. I ended up having to implement terminal velocity along with a lot of other airborne movement controls.
  • When my character skipped right over the collision detection in the halos due to high velocity. I learned some graphics 101: objects can only move in discrete steps bounded by your frame rate. The typical pattern is to remember the last time you rendered a frame and multiply the physics calculation by that delta to update an object’s position. Except, if your object is moving fast enough to pass completely through the collider between two frames, you miss the collision entirely. There’s a handful of approaches to solving this, all highly use-case dependent. I ended up increasing the physics update rate (CPU hit), increasing the depth of my collision meshes (perf hit), and decreasing free-fall speed (gameplay hit?). There’s a small sketch of the basic idea after this list.
  • Programming audio ease, pitch, and volume levels to get that nice brisk wind sound as your velocity increases, or that smooth, calm breeziness when you sit on a platform in the middle of space. (It’s video games, so solar wind makes sound in space.)
  • The GUI presents some pretty tough challenges for the Oculus. The best interaction I’ve seen thus far is simply casting a ray out from the center of your frame to an object and then showing a Wii U/Kinect-style spinner while your option is confirmed. That’s a little less than automatic to build at the moment, so I opted for a basic press-a-key-to-start approach, but the whole orientation and on-screen GUI confused me good; admittedly, what’s working now is neither sustainable nor ideal.
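To make the drag and tunneling points a bit more concrete, here’s a rough sketch of the idea. It is plain TypeScript for illustration, not Unity/C# and not the code Halos actually uses; the constants are made up, the swept check is one common alternative to the increase-the-update-rate and thicken-the-colliders route I took, and the wind-level mapping at the end just shows one way of scaling audio by velocity.

```typescript
// Rough sketch only: plain TypeScript, made-up constants, not the actual game code.
const GRAVITY = -9.81;   // m/s^2
const DRAG = 0.12;       // crude linear drag coefficient
const STEP = 1 / 60;     // fixed physics timestep, in seconds

interface Body { y: number; velocity: number; }

// One integration step. Drag grows with speed, so velocity levels off
// at a terminal value instead of climbing toward a float overflow.
function step(body: Body): Body {
  const accel = GRAVITY - DRAG * body.velocity;
  const velocity = body.velocity + accel * STEP;
  return { y: body.y + velocity * STEP, velocity };
}

// A halo is modelled as a horizontal plane at some height. Checking only
// the end position misses it when one step carries the body straight past;
// checking the whole swept segment from the previous position does not.
function crossedHalo(prevY: number, nextY: number, haloY: number): boolean {
  return prevY >= haloY && nextY <= haloY;
}

// Wind loudness/pitch can simply track how close we are to terminal velocity.
const TERMINAL = Math.abs(GRAVITY / DRAG);
const windLevel = (v: number): number => Math.min(Math.abs(v) / TERMINAL, 1);

// Tiny demo: fall until we cross a halo at y = -50.
let body: Body = { y: 0, velocity: 0 };
for (let t = 0; t < 30; t += STEP) {
  const next = step(body);
  if (crossedHalo(body.y, next.y, -50)) {
    console.log(`hit halo at ~${t.toFixed(2)}s, wind level ${windLevel(next.velocity).toFixed(2)}`);
    break;
  }
  body = next;
}
```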

Results

I’m satisfied that with three long days of dev I was able to put out a basic game for the Rift. I had originally planned to have the steering done entirely through the accelerometer of the Oculus, but was disappointed to learn that those APIs are only exposed in the native Oculus libs, and to even set up the interop to C# I’d have to pay for Unity Pro at $75/month. As far as how interesting the game is, I’m not sure if the hard part is navigating through the halos or not getting nauseous. My current best is 3190 points followed by a 20-minute timeout. Play at your own risk ;)

If you’ve ever wanted to take a crack at game dev but shied away at the perceived complexity, give Unity a shot; it’s pretty straightforward and lots of fun.

Perhaps I will release a mobile version to crowd the app stores with another infinite scroller in the coming weeks.

What is DevOps?

This is a question I pondered this weekend while attending Vancouver’s inaugural devopsdays. I had thought it was something along the lines of developers taking on more responsibility for their applications through their entire life cycle. And to some degree that is correct, but to my surprise, very little of the discussion was about the failure of developers to take on more responsibility and manage their applications; it was about the frustrations of operations people overwhelmed by unwieldy IT infrastructures set up with short-term, ad hoc methodologies.

What I learned is that regardless of your org structure and what you consider to be ‘devops’, having somebody, whether an operations person or a developer, design a systematic, reproducible, automated IT infrastructure is critical to your organization’s ability to ship fast, with quality and high morale. If that is in place, and there is no shortage of tools for it, then empowering developers to embrace those tools and to deploy and maintain their applications becomes systematic.

DevOps doesn’t have to be an add-on to a developer or operations position; it can simply be a set of principles for creating and managing infrastructure.

So that, to me at least, is what devops is.

Roasting. A short film.

We’ve managed to finish our little film for our New Media Images class. For best results watch on Vimeo in HD.

The three of us, April Kum, Yi Luo, and I, really wanted to make something great, something far from anything resembling ‘student work’. I think we had our fair share of mistakes and moments, but on most fronts things went well; we were lucky to get someone like Drew, who’s comfortable in front of a camera and easy to work with. This being our first real view into what making a film is all about, albeit at a sliver of the complexity, it was full of learning experiences. Here are a few:

Do your interview first.
This may be common sense to some people out there, but there are a few obvious advantages to doing it this way.

First, there’s a good chance you’re going to want visuals that support what the subject is talking about. Naturally, that means you need to know what he’s going to talk about before planning the shots. We didn’t realize how critical this was until our six-hour shoot didn’t match up with what our subject was talking about.

Second, you can have a reasonable expectation of the story you want to capture and how you want to portray the subject, but that can change quite quickly once you begin interviewing them and understanding them more. We found that to be the case with Drew, and it changed the whole presentation of the film.

Third, we found storyboarding a documentary to be a bit of a challenge; we had a few key shots we knew we wanted to get, but it’s hardly as linear as a narrative might be. That’s why being able to sit down with the audio and take the time to listen to what he’s saying is so valuable. You’re able to capture the essence of what he’s trying to get at and arrange it accordingly, giving you a clear idea of what shots you need and in what order.

Have someone who represents your audience do the interview.
When it came time to interview Drew, we thought that with my knowledge of the coffee business I would be the best person to create a dialogue around coffee and yield some great answers. To a certain extent that’s correct, and I was able to create a good dialogue; however, after listening to the recording, we quickly realized our conversation was full of subtle jargon and simple things only people in the coffee business know. Even a basic term like “centrals” for Central American coffees is a bit vague. After returning to redo the interview with Yi as our interviewer, his answers were naturally much more appropriate for our audience.

Unless you want the interviewer’s voice in the audio, make sure you have the subject repeat the question in some way as he answers.
Another reason our first recording didn’t work out.

Some thoughts on the footage.
I learned that as the cinematographer you really have to be objective about what you’re seeing in the camera and not let the shot and the emotion in it blind you to its flaws. There were a handful of really great shots that got ruined by pans that weren’t entirely smooth, by the subject not being in perfect focus, by the exposure being a stop too high, and on and on. Having a little more experience helps, but you need to detach yourself from the moment and look at the frame a little more clinically before diving into it.

Quality-wise, I’m a little frustrated that Nikon hasn’t released a firmware update for the D90 giving me a bit more control, especially over ISO. Although I was able to dramatically improve most of the shots with Magic Bullet Looks, at times there’s some significant bleeding and other issues with the image. Should I do more video work in the future, I’ll be lining up a Canon to shoot with.

Last thoughts
I want to thank Drew Johnson of Origins Organic Coffee again for working us into his busy schedule and letting us cruise around with tripods, lights, and gear for a few days. I also want to thank my cool partners April and Yi for putting up with me for the whole project and trusting me to shoot and edit the footage. I hope everyone enjoys at least one moment of the short, if not two.

Update
We had our viewing yesterday in the theatre at SFU, and although it slightly dragged on (just under three hours), it was great to see all our classmates’ projects.

Here were two of my favourites:
The Walk – By David Yao, Marcus Su, ChungWon Yang
Pierce – Pantea Shahsavani, Justin Ramsey, David Holicek

JG

First Chance

This is a very short sequential art project for my Systems of Media Representation class. Shot with a D90, an SB-600, and some great helpers. Thanks again to Kansei, Michelle, and Naomi; couldn’t have made it without you guys.

The Whole World is Listening

A demonstration of how multiple machines can interact with data produced by people around the world, all within an instant.

At the heart of this project is a cheap, open-source electronics platform called the Arduino. This small board comes packaged with a programmable microcontroller, allowing anybody with a little programming skill to control a variety of parts and ports. The chain of communication is quite impressive. In this small project, the internet-enabled Arduino calls scripts on my web server, which query Twitter for any tweets from the following cities:

  • Vancouver, Canada
  • Ueno, Tokyo
  • New York, New York
  • Melbourne, Australia
  • São Paulo, Brazil
  • Beijing, China
  • London, England
  • Pretoria, South Africa

The search looks for any tweets containing the keyword “Haiti” within the last 30 seconds. Depending on the data received, the Arduino activates LEDs according to the origins of the tweets.
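For the curious, the server side of the chain boils down to: for each city, ask the search API for recent tweets near that city’s coordinates containing the keyword, then hand the Arduino something it can map directly onto LED pins. Here’s a rough sketch of that logic; it is illustrative TypeScript rather than the actual scripts (those were built on Ryan Faerman’s PHP wrapper around Twitter’s search API of the time), and the endpoint, field names, and coordinates are assumptions.

```typescript
// Rough sketch of the server-side script the Arduino polls.
// Endpoint, response fields, and coordinates are illustrative only.
const CITIES: Array<{ name: string; geocode: string }> = [
  { name: "Vancouver", geocode: "49.28,-123.12,25km" },
  { name: "Tokyo",     geocode: "35.71,139.77,25km" },
  { name: "New York",  geocode: "40.71,-74.01,25km" },
  // ...one entry per city / LED
];

// True if the city produced a tweet mentioning the keyword
// within the last `windowMs` milliseconds.
async function cityIsTalking(geocode: string, keyword: string, windowMs: number): Promise<boolean> {
  const url = `https://example.invalid/search.json?q=${encodeURIComponent(keyword)}&geocode=${geocode}`;
  const res = await fetch(url);
  const data = await res.json() as { results: Array<{ created_at: string }> };
  const cutoff = Date.now() - windowMs;
  return data.results.some(t => Date.parse(t.created_at) >= cutoff);
}

// The Arduino only needs one character per LED: "1" lights it, "0" doesn't.
async function buildLedString(): Promise<string> {
  const flags = await Promise.all(
    CITIES.map(c => cityIsTalking(c.geocode, "Haiti", 30_000))
  );
  return flags.map(f => (f ? "1" : "0")).join("");
}

buildLedString().then(s => console.log(s)); // e.g. "101" for three cities
```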

The hardware consists of cheap parts bought from a local electronics store. The three programs involved are written using free, open-source software, and access to Twitter’s data is also free. The most expensive piece is the Arduino at $50. With the amount of open data that exists on the web and the ability to buy incredibly high-quality computer parts for dirt cheap, anyone with a little bit of time and ambition can create some very cool projects.

Twitter allows people to communicate across the globe in an instant, and many of these tweets are created using applications on cell phones, text messaging, web-ready devices, laptops, and other machines. Twitter, by nature, uses text as its delivery medium, which enables machines to organize, analyze, and work with the world’s data. Capturing these kinds of conversations between people around the world on such a large scale is unprecedented. Twitter stated on February 22nd, “Today, we are seeing 50 million tweets per day—that’s an average of 600 tweets per second.”

Computers and technology enable millions of people to interact through a variety of different devices, while communicating instantly with each other on a global scale.

Last Thoughts

The end result is a little difficult to take in. I think people generally associate non-screen-based objects with being static, or, if they are interactive, with interaction based on a fixed number of possibilities decided by a programmer at the time of construction. In my case the data is all live and real-time; the data creates the end result. To have such a simple-looking black box communicating so dynamically with a service like Twitter is a new kind of visual experience for many people. Had this project been entirely web-based, or even just displayed on a screen, I believe the impact would have been much lower; it’s the fusion of data produced electronically and represented physically that is intriguing.

Quick Video Demonstration

Thanks to Ryan Faerman for his brilliant Twitter Search API wrapper class, to the guys at Lee’s Electronics who answered all my questions with the utmost patience, and to Visual Culture class for inspiring this little experiment. Cheers to hopefully getting a good mark on it.