Hands-free Generative Design with Project Dreamcatcher

Regular readers of this blog will have heard mention of Project Dreamcatcher and how Autodesk Research is using it to explore the future of design. It’s a generative design technology that goes beyond the current “state of the art” – which means combinations of tools such as Revit + Dynamo or Rhino + Grasshopper – as it works backwards from goals stated by the user, rather than expanding and evaluating options that have been programmed in.

One great way for people with a programming background to “get their heads around this” is to relate it to the world of programming languages. The best analogy I’ve heard is that we’re moving from the realm of procedural programming, where we explain how to do things, into the realm of declarative programming (as some of you may have experienced with languages such as Prolog), where we explain what we want to achieve.
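
To make the contrast concrete, here’s a minimal sketch in Python (my choice of illustration language, nothing to do with Dreamcatcher’s internals): the procedural version spells out exactly how to search for an acceptable section depth, while the declarative version just states the goal and leaves the search to a generic “solver”. The toy deflection formula and the brute-force search are purely illustrative assumptions.

```python
# Illustrative only: a toy sizing problem, not Dreamcatcher's actual model.

def deflection(depth_mm: float, load_n: float = 500.0) -> float:
    """Hypothetical stand-in for an analysis step: deeper sections deflect less."""
    return load_n / (depth_mm ** 1.5)

def procedural_design(max_deflection: float) -> float:
    """Procedural: we spell out *how* to find an answer, step by step."""
    depth = 10.0
    while deflection(depth) > max_deflection:
        depth += 1.0          # we chose the search strategy and the step size
    return depth

def declarative_design(goal, candidates) -> float:
    """Declarative: we state *what* we want; a generic 'solver' decides how."""
    return min(d for d in candidates if goal(d))

goal = lambda depth: deflection(depth) <= 1.0
print(procedural_design(1.0))                    # 63.0
print(declarative_design(goal, range(10, 200)))  # 63
```

Both calls arrive at the same depth; the difference is purely in who owns the search strategy, which is the shift Dreamcatcher makes at the scale of whole designs.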

This is all well and good, but it has presented a number of gnarly User eXperience (UX) challenges that the team has so far found very hard to tackle. How do you effectively capture design goals? Every design challenge is different, so while the answers will understandably always differ, won’t the questions themselves mostly differ, too? Where do you even start with formalizing such a system?

The UX research team decided to take a step back and see what could be achieved with custom hardware, rather than relying on classic input devices. The result is a hands-free peripheral device that can sift through ambient theta and delta waves and encode them into formal goals that Project Dreamcatcher can understand. If you’re “old school” you can always connect the D4D to your PC or Mac using the supplied USB cable, but it’s really most effective when configured to communicate directly with Project Dreamcatcher’s cloud-based back-end…
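
Just to illustrate the shape of that pipeline, here’s a purely hypothetical sketch: the D4D’s actual protocol, port name and cloud endpoint aren’t documented here, so every identifier below is an assumption, standing in for “read samples over the USB cable, encode them as goals, post them to the back-end”.

```python
# Purely hypothetical: the D4D's real interface, protocol and endpoint are not
# documented in this post, so every name and value below is an assumption.
import serial    # pyserial, assuming the USB cable shows up as a serial port
import requests

DEVICE_PORT = "/dev/ttyUSB0"                          # hypothetical port name
CLOUD_URL = "https://dreamcatcher.example.com/goals"  # hypothetical endpoint

def read_wave_samples(port: str, count: int = 256) -> list[int]:
    """Pull raw theta/delta-band samples from the headset (illustrative framing)."""
    with serial.Serial(port, baudrate=115200, timeout=1) as device:
        return list(device.read(count))

def encode_goals(samples: list[int]) -> dict:
    """Stand-in for the encoding step: turn raw samples into a formal goal document."""
    level = sum(samples) / (255 * max(len(samples), 1))
    return {"goal": "minimise-mass", "confidence": level}   # made-up goal schema

def push_to_dreamcatcher(goals: dict) -> None:
    """Send the encoded goals to the cloud-based back-end (assumed REST-style API)."""
    requests.post(CLOUD_URL, json=goals, timeout=10).raise_for_status()

if __name__ == "__main__":
    push_to_dreamcatcher(encode_goals(read_wave_samples(DEVICE_PORT)))
```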
