Assignment 5 – Jenkins
May 12, 2017
For Assignment 5 I looked at USA Today's Ghost Factories case study. The journalists used handheld XRF scanners to measure levels of contaminants such as arsenic and lead, and found "dangerous levels" of lead in 21 neighborhoods across 13 states.
A sensor I found on SparkFun was an IR Photo Interrupter. Basically, the sensor emits an infrared beam between two uprights; when the beam is broken, the sensor registers that something passed through. One application would be a punch card reader, either letting only certain punch card patterns through security (like an ID card reader) or tallying how many punches people got on their card at the end of an event. The latter would be really useful for measuring engagement at events such as food truck festivals or beer and wine tastings!
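The punch-tally logic is simple enough to sketch in a few lines. Below is a minimal Python simulation (not actual sensor firmware; a real build would read a digital pin on an Arduino or Raspberry Pi, and the list of fake readings here just stands in for the sensor output):

```python
# Minimal simulation of an IR photo interrupter used as a tally counter.
# A reading of 1 means the beam is intact; 0 means something is blocking it.

def count_beam_breaks(readings):
    """Count falling edges: beam intact (1) -> beam broken (0)."""
    breaks = 0
    previous = 1  # assume the beam starts unbroken
    for state in readings:
        if previous == 1 and state == 0:
            breaks += 1
        previous = state
    return breaks

# Simulated stream: three objects pass through the beam.
readings = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
print(count_beam_breaks(readings))  # 3
```

Counting edges rather than raw zero readings matters: a card sitting in the slot for several samples should still register as one punch, not many.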
Jenkins – Field Test
May 12, 2017
For my field test, I expanded my Photography ChatBot. The Bot’s purpose is to help new photographers learn about some of the most famous photographers and photography terms, provide resources for further learning, and provide links to photography news websites.
The Photo Bot is available here: m.me/1675839529096166
The Photo Bot was created using ChatFuel, a free service that pairs with Facebook pages to allow users to create easy-to-use chatbots.
“AI Rules” allow you to set keywords or phrases.
When the bot picks up on a keyword, it sends back either plain text or a Block.
A Block is a series of commands or Plug-Ins that run together, allowing things such as user input, text fields, and videos. The Learning Resources block displays a short introductory text followed by a list of links to different resources, triggered by the words "Resources", "Learn", "Learning", "Education", and "Information", or any combination of those words.
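The keyword-to-block routing ChatFuel does for you can be pictured as a small lookup. This is an illustrative Python stand-in, not ChatFuel's actual implementation; the block name and trigger words mirror the Learning Resources example above:

```python
# Simplified sketch of an "AI Rule": a set of trigger words maps to a
# named Block. Illustrative only, not ChatFuel's real code.

RULES = {
    "learning_resources": {"resources", "learn", "learning",
                           "education", "information"},
}

DEFAULT_REPLY = "Sorry, I didn't catch that."

def route_message(text):
    # Lowercase, strip trailing punctuation, and check for any trigger word.
    words = {w.strip("?!.,") for w in text.lower().split()}
    for block, keywords in RULES.items():
        if words & keywords:  # any keyword, or combination, triggers the block
            return block
    return DEFAULT_REPLY

print(route_message("Where can I learn more?"))  # learning_resources
```

Matching on any member of the set is what makes "any combination of those words" work: one hit is enough to fire the block.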
I think the Photo Bot serves its function on a simple level. There are countless photographers the Photo Bot could list off, as well as endless resources on the internet and in print for photography information. I think the next step would be to code an equivalent-exposure calculator, for an on-the-go resource.
Jenkins – Vision Paper
May 12, 2017
My glasses fell off my bed as my alarm rang and vibrated my makeshift night table. I had a real one shipped from IKEA, but I didn't have a hammer to actually build it. I reached for my glasses and put them on, my eyes watering from the slow start-up screen animation. The electroencephalography (EEG) Arduino sensors implemented in my glasses began to heat up. I remembered those old bone conduction headphones and smirked; the weirdest things end up making history.
The display came to life and I clicked away the unimportant messages, like how much sleep I got, my dream memory video, and my fitness goal for the day. You can use the glasses in EEG-only mode, or in vision-tracking mode combined with EEG. I chose the latter, because it worked much more like the traditional computer I was familiar with and had fewer bugs.
The display now held only the most important information: my job for the day and the time, 9:45 A.M., September 26th, 2022. I looked towards the RFID codes on my wall, still having to be manually labeled with sticky notes. The shower turned on and the lights in my room moved to the second dimmer setting, giving me time to get my bearings. I checked to make sure I had plugged in my battery the night before, because there are still no batteries that last more than a workday. It was plugged in, so I made my way to the bathroom to shower.
Walking outside, my self-driving car pulled out of the garage. I opened the back door and slid inside, removing my backpack on the way and placing it next to me. The door closed and locked as the car got to 9 MPH. I always wondered why they chose that number, but it makes sense if you don’t think about it too hard, so it stuck.
My location for the day appeared on the screen in front of me: Snæfellsnes National Park in Iceland. The client was a local clothing company, a hip and trendy start-up marketed towards 20-somethings. Hopefully they wouldn't be there. I had the option to enter 360° mode, transferring from the 2D screen to a virtual experience of the location. I clicked it; I haven't had enough time in VR recently. The EEG sent electric signals back into my head to simulate warmth and wind. It was fun, but it used too much battery to run unplugged, so I turned the mode off.
I met my assistants at the airport, the face recognition software getting me quickly through security. The TSA was a shell of its former organization, but it might be better that way. All the lights and light modifiers were there, I had my camera, and we would meet the clothes and model on location.
The flight took an hour and a half, enough time to take a mid-morning nap and mentally prepare for the day. The art director and model were at the airport waiting for us with the clothing options for the day. The art director was hired by me; the model was probably some friend of the start-up owner who couldn't find a job after university. We took two cars to the park, and my assistants started getting the lights and clothes ready.
I liked to do a camera test before every shoot, to make sure I didn’t look dumb in front of anyone. I was still new to this camera and I wanted a quick refresher. The DJI Inspire 5s Mini came to life and unfolded out of its case, the first-person view appearing on my glasses display. The EEG sensors became even warmer as they turned on their fans. I still haven’t installed liquid cooling. I need to remember to make a note.
Up. Down. Left. Right. Pan. Follow mode. Stop. Return. Stop. Orbit. Stop. Down. Everything seemed in order. The crew was ready, it was time to put it in action, everybody was watching.
It was rather easy, in hindsight. The system showed me the best angle to account for sunlight and my lights. My drone camera flew in the preset pattern, then showed me a grid of the images and prompted me for different angles and gestures. An edited version was sent to the model's HUD. I made a preset that just increased the contrast and made the colors vibrant yet faded; it was the closest thing I could get to making it look like Instagram. It usually encouraged them to continue without speaking to me. I just didn't like small talk.
The clothing start-up only asked for three images, and my memory counter showed me that I had 89 to choose from, so I figured I'd do 11 more photos and call it a day. None of the photos were bad, the auto-stabilizer and auto-intelligent focus made sure of that, but some were definitely better than others.
The airplane ride back was the time to sort and choose the photos, and to edit if I had time. The glasses display made it easy to figure out which ones were keepers and which could go away. The clothing company only asked for three, but if I sent five, they would think they had options to choose from, and that they were the ones selecting the photos. I chose six and would delete one later. I loaded a preset, imported the photos, retouched them in Photoshop, and had just enough time to export before stepping off the plane. I would have to select the one to delete before we got to the office; five was okay, but six would make them expect more every time.
I got to the office, and uploaded the photos to the proximity-based network. Unfortunately, the proximity system meant I had to be at the office to upload work, a drawback of the faster upload speed. An email popped up on my display. The photo editor received the photos and is going through them. He still hasn’t figured out that I always send them from the room next door. I’m waiting to see how long it’s going to take, considering we don’t allow uploads from outside the proximity net. He’ll edit them, even though I already did, and send them to the clients.
The client will choose the best photo and it will run on their billboard as well as in their magazine update for the day. Nobody prints magazines or newspapers anymore; everyone just uses their E-MAGs, tablet-like devices that mimic the page-flipping fun of skimming through a magazine, but with AMOLED screens. Users can subscribe to magazines that suit their interests, or have the E-MAG create a personalized zine for their daily dose of news and fashion. Of course, next to each photo is a code that lets users see what the outfit would look like on themselves, with the option to ship it with next-day delivery, which is standard.
I checked the time; it was 5:30 p.m., just in time for a small break before my second adventure of the day. I took the car home and had a quick early dinner of beefy mac and cheese with asparagus. I plugged my computer into the dongle that attaches to my glasses, and the EEG sensors and transmitters fired up into desktop mode. I launched the application, "Silent Glade." It was the newest MMORPG created by the gaming division of Google, which in 2020 acquired most of Silicon Valley. My second job was as social media representative and in-game photographer for Google's "Silent Glade" team. The game was still in beta testing, but I had already had free access for three months, to create promotional materials.
The fantasy game world sprang to life in front of me. The EEG sensors combined with desktop VR mode allowed the game world to appear almost as real as real life, which was quite the feat of modern technology. I checked my log of Google+ posts and cross-referenced it with the list of game regions. I hadn't been to the Forest of Xak yet, so I set off in that direction after materializing my horse from the menu. It would take about five in-game days to get there, which translates to about five minutes in real life, though it would still feel like five days to players.
I arrived at the site in the Forest of Xak. I had picked out a location on the map that overlooked most of the forest; a nice sunset would peek over the treetops just past the river that ran through the middle of the zone. Sunset came, and I took the picture and prepared the Google+ post.
“The Forest of Xak, just one of the playable zones coming to players in Silent Glade. Will you conquer the forest?”
I posted and auto-shared the photo across my social accounts. I took a few more photos on my way back down the path, pretty pictures to send with my replies to questions or trolls in the comments section.
I replied to comments for 30 minutes and then logged off. It was Monday, and that meant my favorite show was being aired on Google Channel 5 at 8:30. I would have plenty of time to continue my adventures in Silent Glade later that night when my friends got home from work. I unplugged the dongle from my glasses, and turned to Channel 5 just in time for the opening credits.
Jakubowski – Vision Paper – FAN VR
May 12, 2017
The Sports Spectrum: Ever so Encapsulating
It's the year 2050, and if you think you can simply watch sports on TV or on your phone, you're in for a treat. No one has watched sports on a TV or a phone for a while now; that "trend" has faded away. If you told a child nowadays that you "watched" games from your couch in front of a TV, or even on social media platforms, they would look at you with wide eyes. Everyone now uses fully interactive sports technology to be immersed within a sporting event.
The first model came out 10 years ago, and many were unsure if it was worth the investment. ESPN was still working out some technical and logistical problems with their model, which allowed people to virtually watch games and pick where they wanted to be. The total cost of $5,700 to install the system on both home and mobile platforms seemed extravagant, even with the seating options the system gives you to enhance the game-day experience beyond a typical stadium view. It all seemed excessive to consumers, including myself. However, more and more people were purchasing them, and word spread like wildfire about how much the quality of the experience changed the viewing of sports. So, like the rest of the world, I decided to dive in and see what I was missing.
I was having a conversation with my Dad, where I explained everything about the technology: how it was selected by ESPN as a prototype to immerse people who weren't at the games in person. People would put on VR headsets and headphones and essentially be "assigned a seat" somewhere in the stadium, where they could feel the game atmosphere and have the feeling of really being there, even if they were hundreds of miles away. The great feature was that at any moment, the user could instantly flip to "broadcast view," the standard view people used to watch on TV; that mode allowed for replays and commentary. The system has all the capabilities to make it feel like the viewer is at the game, with crowd and game noise, without the hassle of travel, ticket prices, or even the weather. The goal of "FAN VR," the common name for all these systems, is to raise people's emotional state enough to stimulate more of a reaction and response to sporting events, and to make this piece of entertainment known throughout the rest of the world. Someday, hopefully, the systems can pair stadium food ordering with the technology, though that still seems far-fetched.
While working for a local morning news station, I was assigned a story on how this could be implemented for showcasing news stories. I chatted with my tech industry sources to get a better idea of the story and whether this technology had been explored. Many told me the feasibility problem is that the camera and audio systems used in stadiums as part of the FAN VR setup take an extremely long time to install and need dedicated operators to make sure they are functioning properly at all times. The problem with news stories is that they often need to be reported quickly, something that does not bode well for the FAN VR systems. Think of trying to report on the war in Africa with these systems: it would be tough with a setup that could take hours to assemble and optimize, all for a two-minute report.
But I had a breakthrough that same day. While our traffic drone was returning to the station after analyzing morning traffic patterns, it dawned on me that these systems could be fitted to drones. The FAN VR system's cameras and mics are all stationary, since users need to lock into a seat in each stadium. Under my idea, the drone would fly to the destination under my control (or even the fan's), collect the right camera angles and settings, and then hover in place for a long period of time. Users could then choose the angle of view they want; they could even slightly rotate the drone for a better VR experience. But since it looked unlikely I could quit my job and run off to develop this myself, I resigned myself back to my holographic desk. I did hit my sources up to ask about drone technology merging with FAN VR, but they told me the stationary requirement is the issue: the drones would need to stay almost completely still while hovering in the air.
The last time I was reporting at the government's digital press conferences, they did indicate a drone research aid package was expected to be passed by the House, granted it got through the Senate and its 102 senators (Puerto Rico joined in 2030, by the way). Security and safety concerns about drones are a thing of the past, with minimal drone incidents since the mid-'20s, when the drone industry really took off.
That night, I was settling in with my VR set in bed. I wanted to catch the end of the Las Vegas Raiders game. The menu popped up with my standard seat location at midfield, upper deck. It asked if I would like to upgrade to a lower-level view for $4.50, but I declined; I was fine with my location and was likely to turn on commentary mode anyway. The video flicked on, and within a minute I heard chants of "F the Rams" in my ear. I fell asleep with a vendor passing by my location. The system includes an auto-sleep mode, automatically shutting down when the viewer's eyes are closed for a certain amount of time.
The audio system has sensors in every part of the stadium, tuned to the specific area where the viewer decides to "sit." The system also includes voice commands, recognizing my instructions accurately and responding only to my voice when I ask to change views or go back to certain ones.
Let's highlight how this technology was installed in stadiums. There are roughly 30 cameras in each venue across the four major sports, including some major universities for college football and basketball. Most stadiums offer 8-10 seating options, usually with 2-3 seating upgrade locations where users can pay an upgrade price to sit closer to the action. Price-wise, the system runs upwards of $5,000, and the annual programming fee is about $150 per sports league, or the FAN VR package at $850 for all four major sports and premium college games.
I recall a conversation with a friend in the sports entertainment industry, whom I often use as a source, about how he thought this type of technology would blow up. That was back in 2027, and I thought it was only speculative talk. The man was spot on with his call nearly 25 years ago. There's a reason I'm still friendly with him, especially when getting tips about technology trends.
I can still remember the days when the tablet was the move for watching sports, because of the multiple camera angles to choose from. I can still remember the excitement of viewers when "3D Slam" came out, only for it to fail to live up to the hype of watching sports in 3D.
I'm thankful for my job working for a media company, specifically morning news. But sometimes I wonder if I was meant to do something else, like my drone and FAN VR combination idea. Anyone want to spot me a few million to start a prototype?
Jakubowski – Field Test – Airport Bot
May 12, 2017
For my field test, I created a chatbot that acts as an FAQ bot for the Syracuse airport. I had worked on this idea in an earlier class assignment, but thought it could be expanded. The goal of this Messenger bot is to quickly inform patrons about Syracuse airport FAQs, especially while they're sitting on planes waiting to deplane and head to baggage claim.
Airports can be overwhelming, especially to people who travel infrequently. The main things people want to know about a new airport are gate information, restaurant locations, baggage claim, and transportation. In doing some preliminary research, I found that many airports, including some big ones on the East Coast, don't have a chatbot that lets users quickly access information instead of waiting to ask an airport or airline employee. I decided to make a test bot for the Syracuse airport and provide all the relevant information.
I used chatfuel.com to create the chatbot. The website has a simple interface and allows me to input answers, links, and images, all necessary items for the airport bot. I wanted to reuse the base of the bot I built for a prior NEW 300 assignment, but instead I had to make a private Facebook group page in order to share the link more effectively (since a private Facebook group link can point directly to Messenger, as opposed to the group itself).
Another big feature I added was "blocks," which are designed to give users good questions to ask based on the information available to the bot. I set the bot up with three blocks to pick from, with "Gate information," "About the airport," and "Transportation" as the main headers where people could file their questions.
I initiated the bot back in mid-April because I knew I would need multiple users at the Syracuse airport.
I sent links to friends and family who I knew would be flying through or to and from the Syracuse airport. In total, I got 26 different users over the span of 1 month. I was always texting people I knew were flying to/from Syracuse to try my idea! A majority of the questions (asked by the users) were focused on gate number and plane departure time, but there were a few general questions as well.
One noticeable trend from early user feedback was that users did not like the main menu function with its three blocks to pick from. They preferred to ask the bot a question and receive an answer based on word association. After dismantling the menu and instead using a simple welcome message, users generally liked the opening interaction, where they could ask the bot anything.
One item that was very instrumental was the ability to embed images in the bot's answers. One user asked for a terminal map of all restaurants, and the bot sent a message containing the terminal map for the user to browse. An idea for bigger airports would be to offer directions to a specific airport store or restaurant, especially if the user can specify their gate. It saves the traveler the time of trying to find a certain spot to eat or meet up with fellow travelers.
Another item I noticed was commonly asked is whether users could pay for parking via their smartphone. That would be a great feature to add: the parking payment method could be linked to the user so they could pay for parking as they arrive home (right from the landing plane) and be on their way without having to wait in line to pay.
Below is the overall data from user responses.
26 overall users
94 total messages sent from users
91% of the time, the bot responded with an answer; the rest triggered the default "I don't understand…" reply
4 messages from users contained obscenities
45% of answers involved gate information
28% mentioned "transportation," "parking," "taxi," or "bus"
26% asked about restaurants
Only 34% interacted with the block menu before I switched to the open welcome message
9% of user messages included the words "dumb," "stupid," or "not what I said." LOL
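The percentages above can be reproduced with a few lines of Python. The raw counts below are hypothetical back-calculations (my notes only preserved the percentages), but they round to the reported figures:

```python
# Recompute the reported percentages from (hypothetical) raw message tallies.
# These counts are back-calculated to be consistent with the figures above.

TOTAL_MESSAGES = 94

tallies = {
    "answered by the bot": 86,     # -> 91%
    "gate information": 42,        # -> 45%
    "transportation/parking": 26,  # -> 28%
    "restaurants": 24,             # -> 26%
}

for label, count in tallies.items():
    print(f"{label}: {round(count / TOTAL_MESSAGES * 100)}%")
```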
I do believe this is a great idea that airports and transportation companies should look into. A load of data is necessary to do it right and make sure customers are given accurate and quick information, with the emphasis on accurate.
The key is to run the bot through Facebook, an established platform that many users already have access to. Users should be able to quickly find the bot on Facebook Messenger and then type in their question. With nearly 50 different answers prepared for this bot, imagine how advanced and in-depth a real bot could be for a major airport like LaGuardia. With the proper resources, a bot could answer thousands of customers' questions, everything imaginable a traveler could ask. One potential negative is that the bot would have to be updated every day with airport activity and any information that cannot be automated.
Overall, a bot could save plenty of time and resources for airports while helping passengers tremendously by supplying a plethora of on-the-go information, right at the fingertips of every traveler.
Jakubowski – Assignment 5 – Sensors
May 12, 2017
In class, there were two sensor ideas we came up with. One was a sound sensor, which we said could be used to track certain wildlife in jungles or forests. Some animals are tough to track or find, especially at night. The right sound sensor, one that could track an animal's sound with sophistication, could prove invaluable to the science and wildlife community.
Another use of the sensor could be for military purposes, to track enemy activity and noise. Though I have a feeling this type of technology is already in use.
The particular sensor I found on SparkFun was more expensive than the rest, but a sound sensor with these capabilities would understandably be expensive.
Edwards – Field Test
May 12, 2017
I chose a complicated issue for the Facebook chatbot assignment, so I am going to expand upon my bot for the field test. I want the bot to give users an understanding of the current political climate in the EU. With Brexit and the buzz around the French election, populism has become a term that swirls around in the news media. The purpose of my bot is to adequately explain what populism means and why it is spreading.
To create my bot, I decided the best thing to do is start with a timeline. To explain populism, you have to start with events back in 2008. Then there’s a discussion of Brexit, Trump’s election and the French election that needs to occur to explain the full picture. Outlining this timeline of events helped me organize my thoughts.
I created all of my blocks concerning the 2008 economic crisis first. Then, I went to the AI section and thought of keywords that would help lead a user to those blocks. I did the same thing with each successive topic.
The first test was not good at all. It was clear that I had a through line on populism, but users learning about the topic for the first time would most likely have a difficult time navigating my bot. Since people with limited knowledge of populism are my intended audience, I had to think through a way to make it more user-friendly.
After tweaking a few things here and there with the AI, I ran a few more tests and finally discovered the solution was to lead the user more directly. Therefore, I directed users to each new topic/ block in the previous block. For example, my block about the winner of the French election says who wins. At the end of that, it says to ask the bot how much Macron won by. By doing this, a user will have a much easier time navigating through my topic.
(i.e.: Macron won the election! Ask me how much he won by!)
After I made this change, the bot still did not recognize some words, so I made a few more tweaks. Now, I think it is pretty successful. However, it has a very prescriptive path from one block to the next as it takes you through the topic; I would have liked it to be more "intelligent."
Overall, I think this chatbot is a simple tool that a user can easily use to understand a daunting topic. It is a singular source that houses all of the facts someone would otherwise find only after conducting several internet searches. I think this technology would be great for news organizations that want to create a single source of information on a given topic. Rather than reading multiple articles published over a long period of time, the user could go to the bot and get all of the essential information needed to understand a topic. If the user wants more detail, the bot could link to the individual articles and videos the organization has written or produced.
For example, the bot would work really well for a news organization explaining all of the facts associated with the Trump/Russia scandal. There are hundreds of articles on that topic that a user would have to sift through to understand the whole picture. A bot could help a user avoid losing important facts that may have gotten lost in the shuffle because they were published several months ago. Finding old articles sometimes requires an extensive search, so a bot would help users avoid that frustration. With a chatbot, all of the information would be in one place.
I think a bot would work very well for complicated issues. I hope to see news organizations utilize this technology in the future.
Link to MY bot: https://www.facebook.com/Kristen-233631247116392/
Edwards – Vision Paper
May 11, 2017
I woke up with the sun again today. It rose at 5:45 AM. As it peeked over the horizon, I could hear the sounds of the forest and smell the fresh dew on the grass. But I am very far from any actual trees. The crowded sidewalks of Manhattan are where I spend my days as a news reporter.
The forest I wake up to is like everything else in 2030: digital. Instead of the drywall that covered the walls of my home a decade ago, all of my walls are screens that project anything I program into them. Right now, it is the forest. Smells of my choosing are pumped through my apartment throughout the day. Lately, it has been Shimmering Pine from Yankee Candle. You know, to go with the dewy forest theme. Last month it was the volcano candle from Anthropologie, to go with my Hawaiian beach theme.
As I sat up in bed I took note of the pitter patter of rain against my window. I asked Alexa, yes she’s still here, to give me an hourly weather forecast for the day. She has become incredibly concise over the years. She knows exactly how I like her to answer: quickly and without unnecessary filler words. She also knows my schedule. I’ll leave my apartment at 7:45AM. I’ll be in morning meetings until 10AM. Then I’ll be out reporting until 3:30PM. I’ll leave work for my commute home at 7:00PM. Taking this into account, she says, “7:45- 8:45 rain. 10-3:30, rain then clear at 2. 7:00- 8:00 clear, but cold. A light rain jacket will do for the day.” I thank her as she scrolls through my online closet interface for a rain jacket. I’ve been browsing the rain jacket selection at Target.com lately, so she rented one that I looked at the longest and poof! It was teleported to my closet and ready for pick up. A drone lifts the rain jacket out of the portal, flies across my apartment and sets it on a stool next to the front door.
I almost ran into the drone yesterday. It’s so quiet and small that I forget it’s there and have a hard time avoiding it mid-flight. It’s all because Alexa is so attentive to my needs that sometimes I even forget what I’ve asked her… and her drone… to do. Amazon is working on the drone software to make collisions less frequent. The drone usually doesn’t detect a human until it’s a couple feet away. Thankfully it hasn’t run into me yet. Some of my friends have had it hit them, but it’s so small that it really doesn’t hurt them. A little cut is all. Maybe if I moved into an apartment with taller ceilings that would help.
Next, I told Alexa that I wanted to be comfortable, but trendy for the day. After a few seconds, she pulls up an array of options on my closet interface. I scrolled through and found a couple pieces that I liked. Alexa has gotten very creative with outfits, but she hasn’t grasped my style quite yet. I had to mix and match a blouse with pants and shoes from a couple different outfits. Once I selected “deliver”… poof! My outfit lay right in front of me and ready to be worn.
The jewelry and makeup package was out of my price range, so I have to sift through my physical collection on the closet shelves. When I walked into my kitchen, Jeeves was already making breakfast. Jeeves is a metallic gold color and doesn't come close to resembling a human. He has arms, a head and a torso, but no legs or skin or anything else. I've seen some that look identical to humans, but that really freaks me out. Jeeves is perfect. And he knows how to fry a mean egg.
By the time I sit down at my kitchen island, my breakfast is plated and ready to be eaten. Jeeves also helps keep my apartment clean. Sometimes I have buyer's remorse, because Jeeves is just an excuse for me to be lazy. All I really need is Alexa and her drone. BUT I splurged for the $10,000 robot and here we are. As I eat my breakfast, my "pump up" playlist is already playing. Thanks, Alexa! Five minutes before I leave, she scrolls through the news headlines to help me prepare for my work day. As soon as she's done, I step out of my window into my car. Yes, there are flying cars.
The transition from ground-based cars to flying was very strenuous. The government couldn’t get it together with the regulations and laws. They ended up deciding on a 2-year transition period. Everyone in the U.S. had two years to trade their cars in for a flying car. As you can imagine, those two years were a disaster. More news for me to report on, though!
As I hop in, I am quickly vaulted to 50 feet above the height of the Empire State Building. Up here is where all the traffic is now. We don’t have to worry about too many planes these days though. Most people teleport. I’m not brave enough to try it, but some swear by it. I’ll stick to commercial flights for now.
It’s a quick 5-minute commute to work. Ten years ago it would have been an hour by car. With the invention of flying, automated cars, however, the word ‘traffic’ has become obsolete. My car drops me off on the 15th floor of my office building and quickly descends to its underground parking spot.
I walk into the newsroom and make my way over to my station. My station consists of a floating interface and a stool. I like to do most of my work standing because it keeps me engaged. Everyone else in the newsroom feels the same. The walls of the building are the same as in my apartment. They project our competitors' news channels, and they also house our server. My floating interface is where I do all of my editing. It is all touch screen, and most of the tools are enabled with voice commands. I scroll through my Facebook to see what is trending on social media. Social media is limited to Facebook these days. Every time a new platform becomes popular, Facebook buys it. It has become much more trendy in the past few years. At first, people were upset about its propensity to buy everything in sight. Now, it has managed to become the best platform you could imagine. It has taken pieces of every platform it has acquired and incorporated them into its sleek, user-friendly design.
Next, I head to my morning meeting, where my story is assigned. I video chat with my sources to get a better idea of the story and what to expect for the day. Soon, I'm hopping out the window into my car and heading off to the story. When I get there, I pull out my cell phone and begin to record b-roll. Cell phone cameras are now the best in the industry. Lugging around a giant camera is a thing of the past. I did bring along a small tripod to help me stabilize my images, though. As soon as I'm finished, I'm recording live videos and teases on Facebook for my thousands of viewers. My interviews are quick and easy, and I am ready to edit. I bring my interface with me and put the package together within 30 minutes. At 4PM sharp, I'm live fronting my package for everyone in the NYC market.
Everyone in the market now tunes into our newscast at 4PM every day. Every TV in the US is automatically turned on and tuned to its market's TV stations. It's been this way since the Era of Fake News and Misinformation. That was a dark period for our country. The spreading of fake news became so widespread that it was indistinguishable from real news. It launched us into war, with citizens rioting in the streets. Fake news stories about Congressmen committing horrific crimes and the President engaging in illegal activities were the absolute tipping point. It took a long time for the country to get back on track, but in an effort to prevent fake news from ever being spread again, the President signed an executive order that allowed news channels in every market to take over broadcasting from 4-6PM.
In an effort to still keep the news organizations accountable, each TV owner can decide which TV station they would like to be tuned into in their market. My station is one of four in the area. We have half of the city tuning into us every day. That is the maximum viewership we are able to have under the new FCC rules. For now, this system is keeping the peace and as a credible journalist, I am making sure that we are reporting the truth fully. I’m not sure how I feel about this new normal. We will see if it remains positive in its purpose to educate the public.
After the newscast from 4-5, the national news takes over. I head back home in my flying car to meet up with Jeeves. I've asked him to have a salmon dish waiting for me when I get home. I can't wait!
Vision Paper Yi Zhang
May 11, 2017
A Day of My Life in 2030
My bed starts to vibrate promptly at 8 in the morning to wake me up. I want to sleep more, and the weight sensor on the bed can tell I'm not going to get up. So my intelligent bed tilts up and makes me slip down to the floor! Every time it does this, I want to go back to the past, when a bed was just a bed.
As soon as I get up from the floor, the water in the bathroom starts to run and the audio system in my room starts to play the morning news. I walk to the bathroom to take a shower.
“Today is the tenth day of World War III. Our hackers hacked into Russian military intelligence systems and successfully destroyed them.”
War is no longer about people fighting with their bodies in these intelligent times. It's all about commanding technology and intelligence. When we need to fight face-to-face on the battlefield, we use robots and drones instead of real human beings. However, I don't want to listen to the boring war news.
“Play some classical music,” I say.
The audio system has sensors in every room of my apartment, so I can give it instructions from any corner. It recognizes my instructions accurately, and right now it responds only to my voice. I can also add other people's voices to be recognized if I want.
After my shower, I put on my smart contact lenses. They serve not only as a pair of near-sighted eyeglasses, but also as VR glasses and the screens of my phone, TV, and computer. By the way, we no longer have anything like the smartphone of 20 years ago. What we have now, which serves similar but more functions than a smartphone, is called a personal smart system. It consists of several wearable devices and chips embedded in our bodies. People choose different wearable devices and embedded chips based on their own preferences and needs. My smart contact lenses are one part of my personal smart system. They project the system's operation screen in the air in front of me or onto a surface, and they recognize my gestures as instructions. I can also use my voice to instruct my smart system through the smart necklace I wear.
After having breakfast, I walk into my work room to start working. I have a portrait shoot in the morning and an architecture assignment in the afternoon. Neither of them needs me to be there. As a photographer and retoucher in 2030, I have a group of robot assistants. I instruct my three robot assistants in the studio to set up the scene and lights. One of the robots has a camera in his body, and I can see its live view on my computer. After communicating with my clients through the live video system, I press the shutter button on my computer and the robot starts to move around to take photos. Every photo it takes records all the visual information in the scene, so for every photo, it moves 360 degrees around the subject. In the post-production phase, it's up to the retoucher to decide what to include, which angle to shoot from, the distance from camera to subject, how much depth of field to show, etc. The photographer, or rather the robot assistant, is only a visual information recorder; it's the retoucher who makes the aesthetic decisions.
After the assistant robot takes the photos, I receive the files at once and start retouching. Although my client hasn't seen my final work, he seems very satisfied with our service and rates us 5 stars on our social media page. Nowadays, the rating on social media is vital to everyone in the world. It serves as a personal credit system and a symbol of your reputation. It affects every aspect of your life, such as your job applications, house renting, loan applications, etc. That makes sense, since people now have more social interaction online than face-to-face. A complicated algorithm calculates a fair rating for every person based on all the ratings they receive from others. Apparently, it's even more important for businesses to earn high ratings. The higher my rating, the more opportunities I have for new clients and jobs.
Time for lunch. I quickly look over my favorite Japanese restaurant's website and order online. Ten minutes later, a delivery drone stops outside my apartment window on the 20th floor. Delivery drones are widely used by restaurants. They are very effective and never get lost.
Suddenly, my smart necklace starts to vibrate. My friend Jessica is calling me for a video chat. But I don't want to talk to her right now. Recently, she has called me every day to complain about her husband, and I'm tired of hearing about it. So I choose to use my virtual chat robot. It forms my image on the screen and imitates my behavior. This chat robot learns my tone and the way I speak whenever I talk. Even my friends won't notice that it's only a virtual image behind the screen. Sometimes when I feel bored, I chat with the virtual robot to train her and to relieve my stress. She is not only a robot, but also my best friend who knows all my secrets. Fortunately, she is smart enough to know what she can say and what she cannot.
After lunch, I go back to work. Waiting until 4 in the afternoon, the best time today for shooting architecture, I send out my drone. It flies to the destination under my control, collects data about the building, and sends the data back to my computer right away. Based on the data, the software on my computer makes a realistic 3D model of the building. Then I choose the angle of view I want. I can even retouch the photo to make it feel like it was taken at night with all the lights in the building on. But since that looks a little unreal, I usually prefer to keep the natural lighting. Almost everything in a photo can be manipulated now, except for the light in the scene. You can change the brightness and contrast of the light, but its direction and texture are what they were in the scene. Maybe one day you will be able to change those easily in post-production, too, and make it look vivid. But will our world be too fake by then?
Finishing all my work, I stick the muscle bandages to my body and go work out. The muscle bandages are made up of sensors designed to detect muscle tension. They are connected to a smart fitness system that provides workout advice based on the data the bandages collect. With the fitness system, people are able to work out scientifically. The occupation of personal fitness trainer disappeared 10 years ago.
At night, I want to entertain myself with a VR live film. I lie down in bed and say, “Play One Day in Rome in VR live mode.” Then the contact lenses create a VR vision that makes me feel like I'm in the scene. I travel to Rome with the characters in the film and witness their stories right beside them. When watching a VR live film, I'm not only a viewer but also a supporting character in the film. At some points in the film, I'm able to influence the development of the story, which makes me really feel like part of the film. Some films allow more than one person to participate in the story, which is the trendiest way of hanging out with friends right now. But sometimes I'm afraid that one day, when VR technology is super advanced, I may not be able to tell which is reality and which is the fake reality.
Assignment 5 – Elena DeLuccia
May 11, 2017
With the SparkFun Single Lead Heart Rate Monitor, I thought it would be interesting to see what different types of music, or specific beats, do to your heart rate. When people listen to EDM music and go to raves, they get adrenaline rushes. In what other genres could this happen, and how often during a song does your heart rate rise? Is it specifically when the beat drops, or is it the buildup, and what brings it back down?
This specific monitor can pick out heartbeats even amid enormous background noise, which is where I thought this experiment could come in handy.
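To sketch how the analysis might work once the monitor's readings are logged: align the BPM samples with the timeline of a song and flag the sections where heart rate jumps well above the listener's average. This is a minimal illustration only; the sample data, section labels, and 10 BPM threshold are all made up, not measured.

```python
def elevated_sections(samples, sections, threshold_bpm=10):
    """Find song sections where heart rate spikes.

    samples: list of (seconds_into_song, bpm) readings from the monitor.
    sections: list of (start_sec, end_sec, label) describing the song.
    Returns labels of sections whose mean BPM exceeds the whole-song
    mean by at least threshold_bpm.
    """
    overall = sum(bpm for _, bpm in samples) / len(samples)
    flagged = []
    for start, end, label in sections:
        in_section = [bpm for t, bpm in samples if start <= t < end]
        if in_section and sum(in_section) / len(in_section) - overall >= threshold_bpm:
            flagged.append(label)
    return flagged

# Made-up 30-second excerpt: resting around 70 BPM, spiking during the "drop".
samples = [(t, 70) for t in range(0, 20)] + [(t, 95) for t in range(20, 30)]
sections = [(0, 20, "buildup"), (20, 30, "drop")]
print(elevated_sections(samples, sections))  # → ['drop']
```

With real data, the sample list would come from the monitor's output and the section boundaries from the track itself, which would let you compare whether the buildup or the drop drives the spike.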
Another idea was to see when someone with social anxiety experiences an elevated heart rate, to help determine a possible medication or coping-mechanism plan.