Category: NTNM Assignments

Assignment 4: Groh

Drones may have a very large impact on my future. Since I want to be a reporter, I am always thinking about new ways to shoot film and how to create an interesting standup. If I am able to go to a location and use a drone to get aerial footage of something I wouldn’t normally be able to reach, it will increase my stature as a reporter. It will make me a go-to person for information, since I can get footage of things other people can’t. It will also make all my footage more dynamic. I will be able to follow rescue crews as they walk into a disaster site. What’s really exciting is that it will make me even more powerful as a one-man-band reporter. To do certain standups, reporters need a cameraman. However, with a drone, I am my own cameraman, and I can have it follow me wherever I go. So I can increase the creativity of my shots.

Assignment 3: Tony Yao

My chatbot is going to talk a little bit about what the NMM program has been working on. People can access the newest content and ask for new updates.


Assignment 3: Groh

After going to the Drone Journalism school this past weekend, I thought it would be valuable to have a destination where people can ask questions about drone law. The laws around such a new technology can be confusing to navigate, so DroneLaw attempts to simplify the intricate legalese and provide novice pilots with the basic information they need to avoid breaking the law.

Click this link to learn about drone laws.

NTNM Assignment 2: Tony Yao

During these classes, I have had the opportunity to get in touch with all these latest technologies. What interested me most is computer-generated Virtual Reality. By creating a virtual world, it is possible to do things that we cannot do in the real world. Compared with Augmented Reality, computer-generated Virtual Reality has more limited use cases, but it can be used for repeatable purposes such as training and design.

For my field test, I would like to create a Virtual Reality assembly scene. Given the limits of current technology, I want to test whether it is more efficient to train with virtual tools or in the real world. Is it possible to create a virtual training program that saves not only actual materials but also the time it takes to learn something new? My plan is to set up a LEGO building simulation and ask people to complete the same build in the real world as well as in the simulation program. By giving directions as an animation or as a manual, I want to compare the efficiency of both methods and reach a conclusion.
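Once both groups have completed the build, their times could be compared with a simple two-sample test. Here is a minimal sketch in Python using only the standard library; the completion times below are made-up placeholder numbers, not real results:

```python
from statistics import mean, stdev

# Hypothetical completion times in minutes for the same LEGO build.
# These values are invented for illustration only.
real_world = [12.5, 14.0, 11.2, 13.8, 15.1, 12.9]
vr_sim     = [16.3, 14.8, 17.0, 15.5, 18.2, 16.1]

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (var_a + var_b) ** 0.5

print(f"real-world mean: {mean(real_world):.1f} min")
print(f"simulation mean: {mean(vr_sim):.1f} min")
print(f"Welch's t statistic: {welch_t(real_world, vr_sim):.2f}")
```

With a real study, a library such as SciPy could turn the t statistic into a p-value; this sketch only shows how the raw comparison would be set up.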

Assignment 1: 3D Model —Yi Zhang

It was really fun to see a 3D model of myself being printed out. During this project, I learned some basic operations for modifying 3D models in Tinkercad.

I think the ability to capture and share 3D information online may change people’s lives tremendously. The first possible application that comes to mind is education. In the future, students will be able to learn through 3D models, which will make things easier to understand. 3D models can also be used for online shopping. Sometimes we buy clothes that look really good on the models, but when we try them on, they don’t fit us that well. In the future, people may be able to upload their own 3D models to shopping websites and try on clothes virtually to see the actual look on their body.

The Makerspace is a great place for students to work on hands-on projects. We were introduced to a fantastic machine: with 3D glasses, we could see vivid and detailed 3D models.


Assignment 1: Groh

The small model of myself did not come out as well as I had hoped. I learned a term called “overhang”: any area of the bust that juts out and requires vertical pillars to act as a support system while it’s being created. If there’s too much overhang, there will be a lot of supports, and it will be harder to end up with a clean final product.

With that being said, there is a great upside to 3D printing. Being able to upload 3D models of objects and share them on social media is a powerful tool, especially if those models are interactive. There is a lot of potential for educational benefits. Whether it’s looking at bodily organs or a physical place, being able to navigate something that was previously impossible to explore allows us to share knowledge with a vast number of people. However, the challenges I foresee are the affordability of doing something like this, the technological requirements it places on people, and the problem of giving information to people who are unprepared to understand it. Spreading knowledge is good, but handing complex ideas to people without the necessary foundational understanding accomplishes nothing.

It's Me! by jgroh on Sketchfab

Field Test – Rickert


I wanted to know what the user experience would be when watching 360° videos at different angles in Google Cardboard. I decided to record static 360 videos at four different heights and then record moving 360 videos at two different heights. I then watched the videos in a Google Cardboard set and recorded my experience watching each video.



Giroptic 360cam ($499.00)

Monopod ($39.00)



I will feel most like I am in the space/in the video when the recording is shot at adult eye level.



Side note: When this camera records, it captures the sky (records above), but there is an empty space below that it can’t capture. This is actually convenient because the recording won’t show the person or tripod holding the camera, as long as they are placed directly below it. Unfortunately, I only had a monopod, not a tripod, so I am holding the camera on a monopod and trying to stay directly below it to keep out of the shot. It would have been much easier with a tripod.

Static Ground: This video actually turned out better than I expected. I thought there would be interference from the surface the camera was placed on, but there wasn’t. Though I didn’t feel like I was necessarily in the video, I think this angle would make for some really cool b-roll if anyone ever made a full-length video or film.

Static Child Eye Level: This video was taken from about 2-3 feet off the ground. I thought this video produced the most normal standing perspective. I’m a 5’4” girl, and from this height the car looked to be a normal size to me, while the man walking by looked a little tall, which is to be expected since most men are taller than I am.

Static Adult Eye Level: This video was shot from directly above my head, so approximately 5’7” off the ground. When I first watched it, I thought I had accidentally clicked the high shot, because I felt like I had a bit of a bird’s-eye view in this video. This was not what I expected.

Static High: This angle was the worst for me. I actually started to feel dizzy when watching this video at this height. People walking by looked way too small and distorted. I don’t recommend using this perspective.

Moving High: Similar to the static high shot, this video felt too out of perspective and too high, and I began to feel dizzy and sick within 10 seconds of watching.

Moving Adult Eye Level: Any movement while watching a 360 video in a virtual reality headset is a little unsettling. I felt dizzy again watching this video, but much less so than with the high perspective. This height made more sense for movement, and it felt as if I was actually walking on the sidewalk myself.


Final Thoughts

My hypothesis was wrong; I expected the adult eye level shot to produce the best standing-in-the-video perspective, but in reality the child eye level shot did that for me.

I loved using the Giroptic 360cam. It is small, light and easily portable. It produced quality content and I didn’t have to do any stitching manually. I didn’t record any video especially close to objects, but still there was no ghosting. My favorite part was using the connected 360cam app on my phone with the camera. I was able to see what the camera was recording in real time on my phone screen and it was especially cool being able to press record from my phone. I used this for the on-the-ground angle so that I wasn’t next to the camera when it started recording. Lastly, it was really nice being able to simply connect the camera to my app through WiFi to instantly watch the video that I just recorded to know if it was good enough or if I needed to re-shoot. Overall, I really enjoyed using this camera and would even think about buying one when the price (hopefully) drops in a few months/years.

Thanks for this semester! I really enjoyed getting to learn about the different storytelling technologies that exist.

PHOTO mode

Vision Paper: American Robots – Veronica Ortiz

The year is 2216 and robots have been integrated into our lives. Most repetitive and manual labor has been taken over by non-autonomous robots. With unemployment at its highest ever, anti-robotic sentiments are on the rise.


The crowd:

“Fuck the robots!”

 “Go back to China!”


Ada Mahajan had never seen such a big group of people packed into Main Street. Many of them were angry; they waved big signs in red ink over the mob and sported clothing branded with the American flag. Most of them, however, were simply curious onlookers, drawn in by the noise and the latest hot-button issue that had everyone taking sides: robots. Two years ago, a bill had been passed that allowed an unprecedented entry of robots into the workforce. While these robots weren’t highly intelligent or very autonomous, they could easily take over most repetitive and manual jobs.

They quickly embedded themselves into our everyday lives; they checked out our food at the supermarkets, they helped us cash our checks at the bank, they even prepared our meals at some restaurants (this last one had been quite an infuriating discovery). With this massive influx of robots, most of them mass-manufactured and imported from China, many humans were left unemployed and out on the street. While governments rushed to find solutions, aid did not come quickly enough to help the thousands affected by this technology. People were angry, and they were angry at the robots.

Ada had been pushing through the mob for a while, unable to see the edge in any direction. She had just gotten out of class and this was her usual route home; unfortunate. Beside her walked Isaac, helping push people out of the way and making a path for Ada to go through. His face was hidden behind a baseball cap and a hoodie; he ducked his head as he gently pushed on the crowd. Ada looked at Isaac’s behavior with disgust, but before she could raise her voice in objection, she bumped into a large figure in front of her.

The large man turned around on the spot, ready to face the source of this intrusion. “What gives?” exclaimed the man and, as he turned around, he noticed a ray of sunlight glint off a sliver of Isaac’s metal face under his hood: a robot. “Hey…” The man had started to react to the robotic presence, but before he could rally attention to Isaac, Isaac delivered a brisk chop to his throat, leaving him incapable of speech.

Isaac quickly took Ada’s hand and started barreling through the crowd, pushing people out of the way. As they cut their way through, faces in the crowd turned towards them, starting to react to the robotic presence among them. Before the crowd could fully retaliate, Isaac and Ada had sped down an alleyway and disappeared.

Far away from the crowd now, Isaac and Ada pushed into an almost empty bookstore to lay low for a while. They stepped in, and Ada waited at the doorway as Isaac scoured through each aisle of the small rustic bookstore.

“It is empty,” Isaac finally announced, triumphantly.

In response, Ada shoved Isaac—hard, and stomped over to the other side of the store. He followed her, inquisitive.

“You are upset,” Isaac discovered.

“You are upset,” she said, mocking him. “No shit, Sherlock.”

“My name is Isaac.”

“I know that.”

“You know a lot of things, master Ada.”

“Don’t call me that.”

“Yes. I apologize.”

“What you did wasn’t okay. You can’t do that. That’s why– that’s why they… Ugh. You’re so frustrating.”

Ada said this last bit as she walked around the bookstore, pushing books to the ground as she strutted past them. Isaac looked on helplessly before finally deciding to walk behind her, picking up the books as she dropped them. With a loud sigh, Ada stopped him from continuing what he was doing, grabbing on to his unfinished metal skeleton.

“I don’t want you to protect me. I don’t want this. See what you’re doing? This is why they hate us– you. They hate y—” Ada stopped dead in her tracks as an old man suddenly appeared in the store, stepping in through a beaded curtain that separated the main store from the back room. He carried a large box full of books, which he gently set down as he eyed the scene in front of him: a teenage schoolgirl accompanied by a tall robot, dressed in civilian clothing and with an unfinished metallic exoskeleton.

Ada Mahajan knew that, in the middle of this anti-robotic movement, there wasn’t anyone out there who was simply indifferent to the robots. This strange man could only be friend or foe, but Ada couldn’t tell which one he was. Still, the man did not know the most damning thing of all: Isaac wasn’t a regular working robot. This was an autonomous, sentient robot capable of decision-making and rational thought, the only one of its kind for all Ada knew, and its only purpose was to protect her.


The crowd:

“Robot-loving fucks!”

 “American Robots, feed our families!”


Dev looked over the crowd through the glass walls of his tenth-floor office. A bearded, pensive-looking fellow, he observed the crowd intently while a group of business-clad men sat at a table with one empty chair, twiddling their thumbs and squirming impatiently. They all stared fixedly at the door and, when it suddenly opened, everyone stood up immediately. Dev, noticing the sudden movement in the room, turned around to see Sandra, his assistant, mince into the room holding a thin transparent tablet. They all sat back down; this was obviously not who they were expecting.

“Sorry, I- they haven’t heard anything yet,” Sandra apologized. Dev looked off into space, wondering what his next move might—“Mr. Mahajan?” she interrupted, “A word? Privately?” Dev stood up and motioned for the other people in the room to stay put. Sandra was a nervous looking girl, but her brilliance was something that Dev never took for granted; he knew to always follow her advice.

Dev followed Sandra out of the room, and they stopped in the middle of an empty overpass that connected two sides of the building. What the building lacked in size it made up for in gorgeous design: it was sleek and functional, with towers of light beaming from every direction and walls that doubled as computers. Each wall had a different function, from projecting information to regulating the building’s environment. Documents could be passed from room to room just by swiping across the wall, and the surfaces could change transparency at the push of a button.

This was American Robots, one of the largest designers and manufacturers of robotics in the country, and Dev Mahajan was its CEO. They had been the first to design a generation of robots that would not be used for the military but would instead embed themselves into our everyday lives. Their technology was cutting-edge, inexpensive and easy to reproduce. When American Robots’ mainframes were cyberattacked not long after the release of their robots, Chinese versions of the models started to appear; they were even cheaper than the originals, but they lacked some of the originals’ computing power. Now the company was in damage control, trying to salvage its revenue by turning to military work while at the same time trying to smooth out the international tension that had been created between the U.S. and China.

Sandra turned around and, looking down at the information on her tablet, started barraging Dev with information. She nervously talked over herself, and Dev couldn’t understand what she was saying. A small spherical robot, FINN, hovered over Dev and tried to get closer to Sandra in order to better pick up what was coming out of her mouth, but still, all Dev could read were jumbled words.

Dev was deaf and, while in the past this could have been an impediment to his success, his ASL translator worked wonderfully for communicating with hearing people. FINN was designed to pick up spoken words and translate them into text that Dev could read on his smart glasses.

Sandra continued frantically: “Okay, so the last sighting of Zhou was down 56th street when she stopped to get some “authentic” food. Now, various witnesses on the scene claim that they saw—”

“Sandra… Sandra. Sandra.” Dev signed her name over and over again, and the name “Sandra” came out through FINN’s speakers. It was not a monotone metallic voice; the voice translated from the signs picked up by Dev’s ASL-translating gloves sounded like the voice of a forty-year-old man, complete with emotional gravity and intonation. Dev finally grabbed her arm to get her attention, and she looked up in return, cutting off her stream of words.

“FINN can’t pick up what you’re saying so fast,” he signed.

“Oh, sorry. Sorry. It’s just–I’m worried, sir,” she apologized.

“Just Dev, please.”

“Dev. What if something happened to Ms. Zhou? What if the mob got to her before she could arrive? If this gets out they might think that—”

“It won’t. We’re gonna find her. I’m sure it’s something silly; stuck in traffic or… Did we check the tunnels?”


“Send the drones– my drones, quietly. Tell the rest to keep scouring the streets. If anyone asks, she’s already at the meeting with us. I’ll keep talking to the investors, keep them comfortable,” he signed.

“Yes, sir- Dev,” said Sandra, already starting to move out.

“And Sandra?”


“Thank you.” Dev meant it; for everything.

“Of course,” she replied, almost blushing, and scurried away.



The crowd:

“I can’t keep living like this…”

Field Test: Using the HoloLens in Social Settings – Veronica Ortiz

As a filmmaker, I try to understand how the experience of consuming content affects us and how different it is for every individual. I’ve observed that society is moving more and more toward solitary ways of consumption: we stream movies and series alone in our rooms, forgoing the communal experience. Water-cooler conversations are becoming infrequent as the threat of spoilers looms ever-present. Still, we are social creatures, and we still gather in some ways to comment on and share things together, even if it’s on a message board. This is why Game of Thrones viewing parties still exist.

Regarding the HoloLens, it seems to me a better alternative to the completely solitary experience of virtual reality. Still, I wondered how effectively one could keep up with outside conversations and outside stimuli while wearing the HoloLens. While everyone was very excited about the HoloLens coming into class, I had one burning comment in the back of my mind when someone was pinching the air during class: “That’s awkward.”

Would it be difficult to follow the outside world and what’s happening in the HoloLens at the same time? Is it uncomfortable for the person wearing the HoloLens, for the people around them, or for everyone? I hypothesized that it would be almost impossible to carry out a simple task with other people while wearing the HoloLens.

To test this out, I had one person wear the HoloLens and play the “HoloTour” app while trying to finish a game of UNO with two other people. To my surprise, the person wearing the HoloLens actually won the game, and she took part in our conversation while commenting on things like “There’s a huge globe in front of your face right now.”


I had everyone who did the test fill out a simple survey about how they felt throughout the test, and I was surprised by the results. Even though I didn’t observe any discomfort in the conversation, the comments on the survey indicated that both participants felt very awkward maintaining a conversation with the HoloLens involved.

Survey Answers:

HoloLens – Google Forms

HoloLens2 – Google Forms

This was obviously a very amateurish experiment, but it would be interesting to test this with larger groups and gather more data on the issue. It seems to me a waste to funnel so much money into a technology that people might shy away from because of its awkward and isolating design.


Field Test: The Future of the Fashion Industry in Virtual Reality – Maxine Williams

Introduction: It took me forever to come up with an idea for a field test. I knew I wanted to do something that people could imagine using in the future. One night while I was thinking of field test topics, I thought about the scene in the movie Clueless where Cher chooses her outfits on her computer. I remember that feature seeming really cool to me when I first saw it. That’s when I came up with the idea of 3D scanning yourself to make a virtual closet.

The Issue: Is it possible to make your own virtual closet through 3D scanning, and would it be useful in the future?


iSense 3D Scanner for Apple iPad Air  $399/499

The Hypothesis:

I can create my own personal virtual closet by scanning myself in my favorite outfits.


Jamaya 3D scans

The results turned out pretty well! I got a friend, Jamaya Powell, to help scan me, but she had trouble understanding how to make a successful full-body scan because she was a little technologically challenged, so I ended up using her as my model instead. In the collage you’ll see six screenshots of the scans I made. The scans are also on the Journavation Sketchfab account if you’d like to see them in full 3D action.

If you click that link, it will take you to my favorite of the six scans. It’s my favorite because we were playing with poses to see how the scanner would react, and it turned out pretty well!

The scans turned out pretty well considering they were all done on an iPad! The only drawback to this becoming something people actually do is that it’s very time-consuming. Creating six reasonably clear scans of my friend took me about an hour and a half, since I had to start over several times. It would have been way too time-consuming to scan Jamaya’s entire wardrobe, so I just had her pull out her favorite dresses.

In the future I think this could be a really cool tool for fashion enthusiasts who are indecisive about what to wear. I imagine this technology as a “smart closet” where you only have to scan yourself once in your underclothes. From there, I envision clothes having a feature where you can download each piece to your closet and dress your 3D model. [I wrote about this in more detail in my vision post!]