Facebook has been in the news a lot over the last few weeks, all because a researcher used the Facebook API to scrape data from Facebook users and their friends and then sold that info to an analytics firm that worked with political campaigns.
Somehow this has morphed into a privacy debate over whether developers with access to the Facebook API should be able to take what they want from Facebook and sell it to anyone they want.
With 2 billion users who gladly tell Facebook their favorite TV shows, movies, clothing brands, how they spend their time, who their friends are, who they voted for, what political movements they support, the color of their eyes, whether they wear glasses and on and on, Facebook has every bit of information needed to figure out what you might want to buy or do.
The fact that Facebook has not fully exploited this (to my knowledge) is somewhat mysterious.
It’s uniquely qualified to, for example, identify the 100 people in Northern Arkansas who want a pack of oversized silver balloons. With this type of information arsenal, Facebook should theoretically be able to obliterate the TV networks and the major newspapers and steal their advertising dollars.
There are ways it can go after Google and Bing, too. Growth for Facebook does not mean getting more users—it has enough. Growth means finding new ways to target advertising and develop paid services.
So…why all the complaining about privacy? This notion is a mainstream media creation developed as a means to kill the monster. Facebook executives are then in a bind and have to apologize. Otherwise they’d seem uncaring and cavalier about privacy. We have Mark Zuckerberg and COO Sheryl Sandberg saying they are sorry and it won’t happen again, unable to explain why it happened in the first place.
Now Congress is raking Facebook over the coals. Zuckerberg, the visionary of the whole scheme, hemmed and hawed and apologized as best he could on Tuesday before the Senate. He’ll appear before the House today for another opportunity to say something stupid that will dominate the news for a couple of weeks.
There is absolutely zero reason Facebook should be saying anything to Congress. I’ve never heard a good reason to show up. I’d love to hear testimony like this:
REPRESENTATIVE: So Mr. Zuckerberg, do you have any way that people can protect their privacy when using Facebook?
ZUCKERBERG: Protect it? They are the ones putting private information on the site. They are voluntarily doing it. We do not force them. We do not trick them.
REPRESENTATIVE: Yes, but do they know this information can be captured and exploited by a third party?
ZUCKERBERG: Hey, do you think we want a third party taking the data of all these people and exploiting it so they can make money without cutting us in? Hell, no. We already put a stop to that. We established this system for our benefit not theirs.
REPRESENTATIVE: So then it is okay for you to violate users’ privacy?
ZUCKERBERG: Yes, it is. They said so when they signed up. They post the fact that they are five foot two and we tell them that they might want to shop at Macy’s in the petite section. So what? This is why they use the site in the first place – to connect with friends and find useful information. It’s not as if we are planting bugs in their homes, like Amazon.
REPRESENTATIVE: So why do you think we are having this hearing?
ZUCKERBERG: Because the Washington Post and other skittish media companies kept pushing you to. So you might greenlight some stupid legislation to prevent our users from using the site the way they want. The obvious goal is to keep us from making any money. To make yourselves seem like you are helping the consumer. It’s all a load of bull.
This would be an honest exchange that can only be imagined since it will never occur. Big media is indeed behind a lot of the complaining and the reasons are clear, at least to me. Facebook is a threat to their bottom lines.
This latest version of Dell’s XPS 13 has some hard acts to follow, including the 2-in-1 version of the XPS 13 I reviewed last year — which itself followed on from earlier iterations that had all impressed. It’s hard to lead from the front, so what has Dell done to refine what was already a top-notch laptop — apart from moving to 8th generation Intel Core processors?
Dell’s ambition with the XPS 13 has always been to put a powerful laptop inside a small chassis, and to make the resulting hardware light and portable. That has always been achieved, and here the aim is more about refining previous benefits rather than doing anything radically different.
The first version of the XPS 13 was lauded as the world’s smallest 13.3-inch laptop. This was followed by an updated model with an InfinityEdge screen bezel measuring 5mm on the sides, 13mm at the top, and 18mm along the bottom. Later came the XPS 13 2-in-1, which was lighter and slimmer than its predecessors, measuring 302mm by 199mm by 11.6mm, and weighing 1.2kg. The new XPS 13 9370 matches these dimensions almost exactly, the only difference being a starting weight of 1.21kg.
Dell has managed to reduce the screen bezel even further, bringing it to 4mm on the short and top edges. All of this is remarkable, although this design’s one drawback remains: minimal bezels mean that the cameras — for video calling and Windows Hello — have to sit beneath the screen. The resulting unflattering ‘up the nose’ view may be enough to put frequent video-call users off this laptop. That’s a shame given the XPS 13’s many good points.
The machined aluminium chassis is sturdy and tough. I was able to flex the lid a little, but not enough to worry about, and the base is very tough. I’d certainly be happy carrying this laptop in a bag without a protective sleeve.
The brushed silver finish to the lid is reasonably attractive, and is coupled with a hatched finish on the carbon fibre palm rest. It looks very similar to last year’s 2-in-1 model. For those who want something a little more stand-out there is an option with a rose gold lid and a white glass-fibre palm rest.
As ever with Dell laptops there are multiple configurations available. Not all screens are touch-responsive, and getting that feature is an upgrade on the entry-level price. At the starting price the screen resolution is 1,920 by 1,080, but this rises to 3,840 by 2,160 in some models — including my touchscreen review unit. Watching video on this screen is an absolute pleasure; the near bezel-free surround is a boon to anyone who places a premium on multimedia.
Moreover, it isn’t necessary to ratchet up the brightness to get the best out of this laptop. I found the default 40 percent setting for working on battery perfectly adequate.
Sound output is great too. Speakers sit on the left and right edges of the base, where they can propel sound outwards to deliver reasonable stereo effects. Listening to music and watching movies were both satisfying, although there is a little distortion at top volume.
The keyboard is a pleasure to use. The keys are large, and very well sprung. There’s barely any click when they are pressed, which means working in a quiet environment won’t annoy other people. The trackpad is wide to match the screen’s aspect ratio, and is smooth under the fingers and responsive. There’s a two-level backlight that’s easily managed on a Fn key.
The power key looks a little strange: it’s an unmarked circle sitting top right of the keyboard that incorporates a fingerprint sensor. It’s an unusual setup, but it works well enough, and is a great way to minimise visual clutter. A similarly minimal approach has been taken with the battery charge light on the chassis; it sits, glowing white while the laptop is charging, on the front centre edge of the chassis, unobtrusive but useful.
In another neat touch, there’s a battery power indicator on the left edge of the chassis. Press a tiny button with your fingernail and up to five lights illuminate to let you know how much charge remains. If only all laptops had such a user-friendly feature.
Minimalism also extends to the laptop’s connections: there are two USB-C ports with Thunderbolt and one with PowerShare, plus an audio jack and a MicroSD card reader.
Preconfigured versions of this laptop all run either Windows 10 Home or Ubuntu Linux, on Intel’s 8th generation Core i5-8250U and Core i7-8550U processors. You can have 4GB, 8GB, or 16GB of RAM, and a 256GB, 512GB, or 1TB SSD. My review configuration married a Core i7-8550U CPU with 16GB of RAM, a 3,840-by-2,160-pixel touchscreen, a 512GB SSD, and Windows 10 Home, and its list price is £1,699 (inc. VAT; £1,415.83 ex. VAT).
Battery life is very good. In one sample session I used the laptop in everyday conditions — writing, browsing, and streaming — for five hours and depleted the battery to 56 percent. This was on its recommended battery power mode, tweaked so that the screen never went off, and with the screen brightness set to 40 percent. On this basis, with my real-world usage pattern, all day computing on battery power is certainly a possibility.
The power brick is relatively small and light, so carrying it is not too much of an additional weight burden.
The latest incarnation of the Dell XPS 13 is a very impressive laptop indeed. I’d go as far as to say it is currently the best small-format laptop I’ve seen. The near bezel-free screen is sharp, bright and a pleasure to view, and sound output impresses too. The keyboard is beautifully weighted, and battery life is excellent.
The only thing that would put me off is the camera location: I’d happily sacrifice some upper screen bezel to have the camera moved up there.
When Apple introduced its first personal digital assistant (PDA), the Newton, in 1992, it was clear from the start that it was not long for this world.
As a concept, the Newton was a head-turner, but its design and functions were weak, to say the least.
Its biggest problem was the deeply flawed handwriting-recognition technology. The mobile processors available at the time were incapable of handling this task with any level of accuracy or precision, while the software was poorly executed.
I remember flying to Chicago for the launch of the Newton at the request of then-Apple CEO John Sculley, who drove this project from the beginning. But during the onstage demo, the handwriting recognition failed repeatedly. We were told it was an early version of the software, but I had a strong sense that Apple was overpromising.
During the early years of the Newton, Palm Computing founder Jeff Hawkins began working on his own version of a PDA. While that device was still in development, Hawkins invited me to his office to see a mockup, which was a wooden block sculpted to look like what would eventually become the PalmPilot.
I asked Hawkins why he thought the Newton had failed. He pointed to his time at Grid Systems, which in 1989 introduced the GridPad, the first real pen-computing laptop. It too had a low-powered CPU and could not handle true character recognition. But it taught Hawkins that when it came to pen input and character recognition, the user needed to follow an exact formula and write the characters as specified in the manual.
That is why the PalmPilot included the Graffiti writing system, which taught a user how to write a number, letter of the alphabet or specific characters (like #, $) in ways the PalmPilot could understand. I was one of the first to test a PalmPilot and found Graffiti to be very intuitive. One could call this a form of reverse programming since the machine was teaching me how to use it in the language it understood.
Fast forward to today, and I believe we have a similar thing going on with digital assistants.
One big difference this time around is that the processing power, along with AI and machine learning, makes these digital assistants much smarter, but not always accurate.
In what I think of as a Graffiti-like move, Amazon sends me weekly emails that include over a dozen new questions Alexa can answer. This too is sort of a reverse programming, as it teaches me to ask Alexa the proper questions.
From a recent email, here are some of the new things Alexa can respond to:
• “Alexa, what’s on your mind?”
• “Alexa, what’s another word for ‘happy’?”
• “Alexa, what can I make with chicken and spinach?”
• “Alexa, call Mom.”
• “Alexa, test my spelling skills.”
• “Alexa, wake me up in the morning.”
• “Alexa, how long is the movie Black Panther?”
• “Alexa, speak in iambic pentameter.”
• “Alexa, how many days until Memorial Day?”
These weekly prompts allow me and other Echo owners to understand the proper way to ask Alexa a question, and builds up our confidence in interacting with the platform.
I have no doubt that as faster processors, machine learning, and AI are applied to digital assistants they will get smarter. But I suspect that more and more companies that create digital assistants will also start using Amazon’s model of teaching people how to ask questions that are more in line with how their digital assistants want a query to be stated.
Tufts University professor Susan Landau has a long and distinguished background in computer security and policy that includes several books on wiretapping and surveillance. She has repeatedly argued that damaging our security by embedding holes for law enforcement — whether that’s wiretap equipment inside ISPs or exceptional access (that is, special decryption capabilities in encryption software) — is a dangerous approach. Encryption is not sufficient to guarantee cybersecurity, but it is essential.
In Listening In: Cybersecurity in an Insecure Age, Landau considers the changing world in which law enforcement must operate with exceptional clarity. She begins with a brief history of cybersecurity. The first known cyberattack was in 1986, when Clifford Stoll began trying to understand a 75-cent discrepancy in computer time; he told the story in detail in his book The Cuckoo’s Egg. The next, and the first proper internet attack — although it wasn’t really intended as such — was the 1988 Internet Worm. Despite these early warnings, Landau writes, quoting from a US government report, “security lost to convenience in the 1980s. And then it kept on losing”. It wasn’t until 2008 that cyber-threats began to be taken seriously.
Throughout the 1990s, credit card theft, online bank robbery, and other financial crimes were growing. By the mid 2000s, the targets were expanding from corporate servers to nation states. In 2007, a DDoS attack on Estonia lasted nearly a month. In 2009, Iran began noticing problems with its nuclear centrifuges; in 2012, the reason was identified as Stuxnet, which brought cyber-attacks into a new era.
It was against this background that the ‘First Crypto War’, which pitted law enforcement against privacy campaigners, took place. In 2013, Edward Snowden’s revelations swung the pendulum back toward privacy, while the terrorist attacks in London, Paris, and San Bernardino swung it away again. When the FBI sought access to the iPhone belonging to the San Bernardino shooter, the resulting court case reopened all those old wounds.
What does law enforcement need? It complains about ‘going dark’ because encryption limits access to content, but equally can take advantage of many more kinds of data than ever existed before — and most of it is never deleted. In one case Landau cites, a cryptic text message unraveled an insider trading case; in an earlier era, it might have been destroyed before law enforcement ever saw it. Linked databases, communications metadata, and analytics from companies like Palantir all play their part in helping law enforcement do its job, as does properly warranted legal wiretapping and hacking.
But during this time law enforcement has ceased to develop its own technical capabilities. Landau argues that a reversal is essential, and that law enforcement needs 21st century capabilities to conduct investigations that match those of its enemies and attackers.
Landau ends her book with the 2016 and 2017 attacks on the US and French presidential elections. Government’s role, she concludes, is to provide security — but not to prevent individuals from maintaining their own. It should, in other words, be helping us.
Like Apple, Dell has been steering its AIO range towards professional users in the last year or so, and the consumer-oriented XPS 27 has had its own ‘pro’ update in the shape of the Precision Workstation AIO 5720. The Precision AIO 5720 could even teach the iMac Pro a few things about streamlined design, as its 27-inch 4K display is surrounded by only the narrowest of borders; also, at 24 inches wide and 13.5 inches high, the glass panel is noticeably more compact than the 25.5 inch by 15.1 inch panel of its Apple rival.
The 5K display of the iMac Pro does have higher resolution, and both the iMac Pro and Microsoft’s Surface Studio support the broadcast industry DCI-P3 colour-space for video editing. However, the Precision’s 4K display (163.2dpi) produces an impressively bright and colourful image, and supports 100 percent of the Adobe RGB colour space, so it’s well suited to tasks such as graphic design, illustration, and photography. It also includes both HDMI and DisplayPort interfaces, allowing you to easily work with a multiple-monitor setup if you need to. The Precision AIO 5720 has another ace up its sleeve as well, as it allows you to remove the back panel in order to replace and upgrade both memory and storage — and with a little more effort it’s even possible to remove the processor too, making the whole system far easier to service than most of its AIO rivals.
You’ve got plenty of scope for customization too. The mid-range model reviewed here is competitively priced, at £1,588.82 (ex. VAT; £1,906.58 inc.VAT, or $1,926) with a Core i5-7600 processor running at 3.5GHz (4.1GHz with TurboBoost), 8GB of RAM, a 500GB hard drive, and a Radeon Pro WX 7100 GPU with 8GB of dedicated video memory. However, there are dozens of build-to-order options available, going right up to £2,916.12 (ex. VAT; £3,499.34 inc. VAT) with a 3.8GHz Xeon E3-1275 v6 CPU, 64GB of RAM and a 1TB solid-state drive (the US configuration peaks at a 3.7GHz Xeon, costing $3,757).
Those prices include Windows 10 Pro, but you can also add a license for Windows 7 Pro for another £32.50 (ex. VAT; £39 inc. VAT, or $21), or save £105 by choosing a ‘no-OS’ system that comes with only DOS on CD (an option that currently only appears to be available in the UK).
Those configurations and prices aren’t quite in the same league as the iMac Pro, and Geekbench 4 reveals raw processor performance for the Precision’s Core i5-7600 to be relatively modest, with scores of 4,400 and 11,530 for single-core and multi-core performance respectively. In contrast, the iMac Pro’s 8-core Xeon hits 31,400 for multi-core performance, which will clearly give it an advantage for tasks such as video editing with multiple streams of 4K video.
However, the Precision AIO 5720’s great strength is its Radeon Pro WX 7100 GPU, which manages an impressive 129fps when running the demanding Cinebench R15 graphics tests, indicating that it will make a powerful and competitively priced workstation for graphics and design software.
There are some rough edges, though. The use of a conventional hard drive in our review unit proves to be a real bottleneck, with the ATTO Disk Benchmark recording speeds of just 129MB/s and 125MB/s for write and read performance respectively. There’s also a rather leisurely 45-second boot sequence to contend with, followed by some further cursor-spinning before the system is fully ready to start work. Professional users with deadlines to worry about will probably want to upgrade to a faster solid-state drive. Alternatively, if you don’t mind removing the back panel, it’s also possible to install your own SSD using the PCIe slot inside the unit.
The internal speaker system is a little disappointing too. The Precision AIO 5720 boasts eight active drivers and two passive bass radiators — effectively building a large soundbar into the base of the display for what Dell claims is ‘studio quality’ music recording and editing. Yet the sound emanating from the Precision seemed quite thin and insubstantial when listening to both streaming video and audio CDs.
Also, given the attention to detail in this system’s compact and upgradeable design, it’s odd that the various ports and connectors are infuriatingly inaccessible. There’s one USB port on the right-hand edge of the display, but the other four USB ports, a pair of Thunderbolt 3 ports, RJ-45 Ethernet, HDMI and DisplayPort connectors are all set into a cavity on the back panel and hidden behind the pedestal stand that supports the display.
The Precision 5720 is not without its flaws, but its high-end graphics performance ensures that it’s well suited to tasks such as photography, design and illustration work. It’s also considerably less expensive than rivals such as the iMac Pro and Microsoft’s Surface Studio. And with its relatively easy user repairs and upgrades, it should find favour with budget-conscious IT managers too.
Most major automakers have committed to converting their entire future fleets to some form of electric power, and if you look at autonomous vehicle developers like Waymo, it’s clear that electric vehicles (EVs) will go hand in hand with self-driving technology.
Unlike traditional internal combustion engines, EVs don’t require oil changes, have fewer moving parts to wear out, and rarely break down, which should worry the more than 160,000 independent auto repair shops in the US. In an EV-dominated future, gas stations will eventually be hit hard.
Even if you pay at the pump with a credit or debit card, chances are some of your fuel stops include a trip inside a gas station or convenience store to buy a drink or snack. In fact, Jeff Lenard of the National Association of Convenience Stores told The Washington Post that fuel sales comprise just 40 percent of gas station profits. The rest of revenue is rung up inside the store, and beverages are the bulk of sales.
For example, 63 percent of US sales for Monster Beverages comes from gas stations and convenience stores, the Morgan Stanley report revealed. Other convenience store purchases, such as alcoholic beverages and tobacco products, may not see the same declines; it’s the drinks that people grab impulsively while fueling up and consume right away that are most at risk.
“Beverages drive sales, and beverages drive profits at convenience stores,” Lenard added. “So any competition that could reduce those sales and those profits is a concern.”
The Morgan Stanley report acknowledged that EVs currently comprise only a small fraction of the total vehicles currently on US roads. From December 2010 to February 2018 the combined market share of all EVs in proportion to traditional internal combustion engine new vehicle sales was just 0.7 percent, according to InsideEVs. Convenience store industry analysts believe that any significant damage to their bottom line—and beverage sales—due to electric cars is still decades away.
But given automakers’ commitment to EVs, rapid market growth is expected. So convenience stores might want to consider catering to EV owners, much as the Mid-Atlantic gas station chain Sheetz (where customers can order food items such as drinks and sandwiches via touch screen while filling up) has done by partnering with Tesla to install Supercharger stations.
And while the Journal estimates that 80 percent of EV owners charge at home and a full battery can get them where they want to go and back in a day, road-tripping EV owners may spend more time—and more money—at convenience stores like Sheetz since it takes much longer to top off at a public charging station than it does to fill a gas tank.
“I think [convenience] stores will do what they always do,” Lenard added. “They’ll find a better way to compete.” And there will always be a convenient place to stop in for a beverage or snack even if you just need fuel for your body.
In 2012, a group of scientists from the University of Toronto made an image-classification breakthrough.
At ImageNet, an annual artificial intelligence (AI) competition in which contestants vie to create the most accurate image-classification algorithm, the Toronto team debuted AlexNet, “which beat the field by a whopping 10.8 percentage point margin… 41 percent better than the next best,” according to Quartz.
Deep learning, the method used by the team, was a radical improvement over previous approaches to AI and ushered in a new era of innovation. It has since found its way into education, healthcare, cybersecurity, board games, and translation, and has picked up billions of dollars in Silicon Valley investments.
Many have praised deep learning and its superset, machine learning, as the general-purpose technology of our era and more profound than electricity and fire. Others, though, warn that deep learning will eventually best humans at every task and become the ultimate job killer. And the explosion of applications and services powered by deep learning has reignited fears of an AI apocalypse, in which super-intelligent computers conquer the planet and drive humans into slavery or extinction.
But despite the hype, deep learning has some flaws that may prevent it from realizing some of its promise—both positive and negative.
Deep Learning Relies Too Much on Data
Deep learning and deep neural networks, which comprise its underlying structure, are often compared to the human brain. But our minds can learn concepts and make decisions with very little data; deep learning requires tons of samples to perform the simplest task.
At its core, deep learning is a complex technique that maps inputs to outputs by finding common patterns in labeled data and using the knowledge to categorize other data samples. For instance, give a deep-learning application enough pictures of cats, and it will be able to detect whether a photo contains a cat. Likewise, when a deep-learning algorithm ingests enough sound samples of different words and phrases, it can recognize and transcribe speech.
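The principle can be seen in miniature without any deep-learning framework. Below is a toy sketch in Python: a single artificial neuron (a perceptron) learns a mapping from labeled examples, the same find-patterns-in-labeled-data idea that deep networks scale up across millions of samples and layers. The two-number “features” and the cat/not-cat labels here are entirely made up for illustration; real systems learn from raw pixels.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Learn weights that map labeled inputs to the right outputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred           # zero when the prediction is correct
            w[0] += lr * err * x1    # nudge the weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Hypothetical labeled data: 1 = "cat", 0 = "not cat"
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
print(predict(w, b, (0.85, 0.9)))  # a new sample near the "cat" cluster -> 1
```

A deep network is, very loosely, thousands of such units stacked in layers, which is exactly why it needs so many more labeled samples than this four-example toy.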
But this approach is effective only when you have a lot of quality data to feed your algorithms. Otherwise, deep-learning algorithms can make wild mistakes (like mistaking a rifle for a helicopter). When their data is not inclusive and diverse, deep-learning algorithms have even displayed racist and sexist behavior.
Reliance on data also causes a centralization problem. Because they have access to vast amounts of data, companies such as Google and Amazon are in a better position to develop highly efficient deep-learning applications than startups with fewer resources. The centralization of AI in a few companies could hamper innovation and give those companies too much sway over their users.
Deep Learning Isn’t Flexible
Humans can learn abstract concepts and apply them to a variety of situations. We do this all the time. For instance, when you’re playing a computer game such as Mario Bros. for the first time, you can immediately use real-world knowledge—such as the need to jump over pits or dodge fiery balls. You can subsequently apply your knowledge of the game to other versions of Mario, like Super Mario Odyssey, or other games with similar mechanics, such as Donkey Kong Country and Crash Bandicoot.
AI applications, however, must learn everything from scratch. A look at how a deep-learning algorithm learns to play Mario shows how different an AI’s learning process is from that of humans. It essentially starts knowing nothing about its environment and gradually learns to interact with the different elements. But the knowledge it obtains from playing Mario serves only the narrow domain of that single game and isn’t transferable to other games, even other Mario games.
This lack of conceptual and abstract understanding keeps deep-learning applications focused on limited tasks and prevents the development of general artificial intelligence, the kind of AI that can make intellectual decisions like humans do. That is not necessarily a weakness; some experts argue that creating general AI is a pointless goal. But it certainly is a limitation when compared with the human brain.
Deep Learning Is Opaque
Unlike traditional software, for which programmers define the rules, deep-learning applications create their own rules by processing and analyzing training data. Consequently, no one really knows how they reach conclusions and decisions. Even the developers of deep-learning algorithms often find themselves perplexed by the results of their creations.
This lack of transparency could be a major hurdle for AI and deep learning, as the technology tries to find its place in sensitive domains such as patient treatment, law enforcement, and self-driving cars. Deep-learning algorithms might be less prone to making errors than humans, but when they do make mistakes, the reasons behind those mistakes should be explainable. If we can’t understand how our AI applications work, we won’t be able to trust them with critical tasks.
Deep Learning Could Get Overhyped
Deep learning has already proven its worth in many fields and will continue to transform the way we do things. Despite its flaws and limitations, deep learning hasn’t failed us. But we have to adjust our expectations.
As AI scholar Gary Marcus warns, overhyping the technology might lead to another “AI winter” — a period when overly high expectations and underperformance leads to general disappointment and lack of interest.
Marcus suggests that deep learning is not “a universal solvent but one tool among many,” which means that while we continue to explore the possibilities that deep learning provides, we should also look at other, fundamentally different approaches to creating AI applications.
Even Professor Geoffrey Hinton, who pioneered the work that led to the deep-learning revolution, believes that entirely new methods will probably have to be invented. “The future depends on some graduate student who is deeply suspicious of everything I have said,” he told Axios.
Unlike the amusing gags of the past, this year’s April Fools’ Day was an incredible dud. Elon Musk, for example, “joked” that Tesla was broke.
The gag wasn’t too big of a stretch, though, and Tesla stock took a dive on Monday. Oops. (No worries. It rebounded on Tuesday.)
Tesla Goes Bankrupt
Palo Alto, California, April 1, 2018 — Despite intense efforts to raise money, including a last-ditch mass sale of Easter Eggs, we are sad to report that Tesla has gone completely and totally bankrupt. So bankrupt, you can’t believe it.
Musk missed the point here; it was along the lines of telling your spouse that you have cancer, then saying “April Fools!” These jokes should be simple, obvious when deconstructed, and have hints within that they are, indeed, jokes. Above all, the joke is not on you, but on someone else.
It’s likely that Musk thinks that as a CEO of a public company he can tweet whenever he wants to. After all, Trump does it and he’s the president of the country. So what possibly could go wrong?
Let’s assume that Musk is monitored by corporate media folks. They have to worry about SEC rules and public perceptions. Did they tell Musk the tweet was not acceptable? Since Musk is perceived as a business genius, I can see him laughing off the concerns and convincing them that it was a classic April Fools’ joke and a great promotion for the company because it ridicules the Tesla skeptics.
Oh. Okay, boss.
Sadly, the internet has made April Fools’ gags useless and outdated. Every day is April Fools on the web. Hoaxes abound.
Another problem I have is institutionalizing jokes. A kid in the sixth grade teasing a classmate is one thing, but a billion-dollar corporation making fake announcements is a new level of tomfoolery.
Since April 1 is not recognized by any government authority, I believe a corporation should be fined by regulatory agencies for false statements. If the stock price dips and you are day trading, you may have been defrauded.
Bring action against one of these corporations, and they will all suddenly realize that they are not comedians. Start with Tesla.
Windows Server is both moving to the faster six-month release cycle of the Windows client and staying a server OS that comes out every two to three years. This split personality is managed through what Microsoft calls ‘channels’: the Semi-Annual Channel (SAC), which includes only the GUI-less Server Core and Nano Server; and the Long-Term Servicing Channel (LTSC), which includes Server Core and the full version with the Desktop Experience.
Due for release in the second half of 2018 (very possibly at Microsoft’s Ignite conference in September), Windows Server 2019 is the first LTSC version that can take advantage of the features that have been incubated through the SAC releases — for example, a much smaller Server Core image size, or the Windows Subsystem for Linux (WSL). As in Windows 10, WSL means you can install multiple Linux distros and use them to run Linux scripts and (command-line) utilities. Unlike Windows 10, Server 2019 doesn’t have the Windows Store, so you need to know the direct download link for the distro you want and the PowerShell commands to download, unzip, and install it.
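For a rough idea of what that looks like, here is a sketch of installing a distro on Server 2019 in PowerShell. The aka.ms shortlink and file names below are assumptions based on Microsoft's published Windows 10 instructions for Ubuntu, so treat this as illustrative rather than a guaranteed recipe — check the documentation for the distro you actually want:

```powershell
# Enable the WSL feature (a reboot is required afterwards).
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

# Download the distro package directly -- there's no Store on Server 2019.
# The aka.ms link is the one Microsoft documents for Ubuntu 16.04; treat it
# as an example URL rather than a guarantee.
Invoke-WebRequest -Uri https://aka.ms/wsl-ubuntu-1604 -OutFile Ubuntu.zip -UseBasicParsing

# Unpack the package and run the distro's installer.
Expand-Archive Ubuntu.zip C:\Distros\Ubuntu
C:\Distros\Ubuntu\ubuntu.exe
```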
Windows Server 2019 also brings the Windows 10-style desktop to the server, replacing the Windows 8 GUI from Windows Server 2016. The cascading menus of the Start menu are a better fit for a server than the finger-friendly live tiles that took over the whole screen, but the way the Windows 10 Start menu relegates ‘Run as Admin’ to the secondary More flyout on context menus makes it far too fiddly for something server admins do so often. As with Windows 10, system settings are divided between the control panel and the modern Settings panel in ways that can make tasks like joining the server to a domain involve more clicks than you’re used to — especially as the handy context menu that appears when you right-click on the Start button no longer includes the control panel.
Settings aren’t exactly the same as on Windows 10: adding a local account brings up Users and Groups, for example, while some Windows 10 settings — like connecting to an Android or iOS phone to sync browser tabs — seem inappropriate to a server and will likely disappear in later builds. If you were hoping that the server OS would make the transition from the control panel more coherent, it’s clearly still a work in progress — and of course much more of the emphasis for Windows Server management is on PowerShell.
Particularly interesting for companies with servers that haven’t been upgraded in a while is support for direct, in-place upgrade from both Windows Server 2016 and Windows Server 2012 R2. This works in the preview but you obviously won’t want to try it on your production systems. Annoyingly, the installer offers the upgrade option even on systems that don’t have a previous version of Windows Server to upgrade, and if you choose it the installer insists that you exit and start the installation again from scratch.
It’s also worth noting that a bug in the preview image means that if you’re using DISM or other deployment tools to install Windows Server 2019, rather than using the ISO, the naming of installation options is incorrect, so you need to use the index numbering in scripts: 1 for Server Core Standard; 2 for Server Standard with Desktop Experience; 3 for Server Core Datacenter; and 4 for Server Datacenter with Desktop Experience.
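In a deployment script, that means hard-coding the index instead of the edition name. A minimal sketch — the image path and target drive letter are placeholders, and the DISM command line is only echoed here rather than executed:

```shell
# Edition indexes in the Windows Server 2019 preview image:
#   1 = Server Core Standard,   2 = Server Standard with Desktop,
#   3 = Server Core Datacenter, 4 = Server Datacenter with Desktop Experience
INDEX=4  # pick Server Datacenter with Desktop Experience

# Build the DISM apply command; a real deployment script would run this
# on the target system instead of echoing it.
CMD="DISM /Apply-Image /ImageFile:install.wim /Index:${INDEX} /ApplyDir:W:\\"
echo "$CMD"
```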
Beyond point and click
In the final release, the ability to upgrade in place will be especially useful for smaller businesses that don’t have extra hardware to use for migrating to a new server release. In principle, Project Honolulu offers those customers the option of moving to Server Core, which is a big security advantage because Server Core needs far fewer security updates (and fewer reboots).
For simple server management, Honolulu is a friendly interface that comfortably replaces Server Manager. It runs as a gateway anywhere on your network and offers everything from a file browser to hyperconverged cluster and Storage Spaces Direct management, complete with a detailed new view of SSD performance history right down to individual drives and network adapters. (Because it’s under development and works with older versions of Windows Server, Honolulu is a separate install, but it’s clearly part of the direction for Windows Server in the long term.)
But once you make the leap to more powerful options in Honolulu (connecting it to Azure Active Directory to use the new hybrid cloud options like setting up Azure Backup and File Sync for your server, for example), you still need to get comfortable with installing PowerShell modules and running PowerShell scripts. We’d like to see that become simpler in later versions, to give smaller companies with less expertise access to the advantages of cloud connections. More experienced admins may hope for similar connectivity to other cloud services, but for now this is Azure-only.
The Azure services you can connect to Windows Server 2019 need subscriptions. A particularly interesting option is Windows Defender Advanced Threat Protection. ATP is a ‘post-breach’ service that detects suspicious behaviour that anti-malware hasn’t been able to block, and having that extended to servers is excellent news.
Confusingly, Windows Defender ATP Exploit Guard in Server 2019 is related to the Azure service only in that you can use the service for reporting on Exploit Guard events (the name and many of the features come from Exploit Guard in Windows 10). It’s a set of rules, controls and EMET-style vulnerability exploit mitigations you can use to block scripts, suspicious files, lateral movement, outbound connections to untrusted hosts, and access to protected folders by untrusted processes.
Shielded VMs can now protect Linux VMs as well (Ubuntu, RHEL and SUSE Enterprise Server are supported), giving them a virtual TPM and BitLocker encryption as well as checking the health of the host Hyper-V system. To make this more robust on less reliable networks you can now create a fallback connection to the Host Guardian Service that runs the health check, and even configure Shielded VMs to run without the ability to connect for the health check as long as the host’s security configuration hasn’t changed since it was last checked. VMConnect Enhanced Session Mode and PowerShell Direct can connect to shielded VMs if they’ve lost network connectivity so you can update them and get them back online. The ability to encrypt the virtual subnet on which important VMs run without having to make complex changes to the VMs means they don’t leak data from network traffic. This combination of features updates some important security features, making them more robust and more useful for the increasing number of organisations that run both Linux and Windows Server.
The first SAC release of Windows Server caused some confusion because it didn’t include Storage Spaces Direct (although if you upgraded a server that had it installed, it carried on working). That didn’t indicate anything about the future of the feature, just the emphasis of that release on DevOps scenarios like containers. The performance history isn’t the only new option for Storage Spaces Direct in this preview; if you want to improve fault tolerance you can now manually delimit the allocation of volumes. Instead of spreading data out as small ‘slabs’ that are distributed across every drive in every server for performance, you can limit the slabs to a subset of servers. If three servers fail when the slabs are evenly distributed, it’s very likely that at least some of the data will be unavailable until you recover the servers; if three servers fail when the data distribution is limited to fewer servers, it’s more likely that the surviving servers have all the data and you can carry on using the volume. So far this is a PowerShell-only option, but it definitely gives you more nuanced choices about performance and availability.
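Based on what Microsoft has shown so far, delimiting a volume looks roughly like this in PowerShell. The server names are placeholders and the exact parameter names are assumptions drawn from the preview documentation, so treat it as a sketch:

```powershell
# Pick three specific servers (fault domains) to hold all of this volume's
# slabs, instead of letting them spread across every server in the cluster.
$Servers = Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -In @('Server1', 'Server2', 'Server3')

# Create the volume, delimited to just those servers.
New-Volume -FriendlyName 'DelimitedVolume' -Size 1TB -StorageFaultDomainsToUse $Servers
```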
The Remote Desktop Session Host (RDSH) role isn’t included in this preview build. Microsoft is clear that Remote Desktop Services isn’t going away, but what’s unclear is whether it’s just that RDSH isn’t in this preview, or whether it’s going to be replaced (or more likely, supplemented) by a host role that runs on Windows 10 desktops.
This Insider Preview is both a solid release and a frustratingly minimal set of new features for Microsoft’s next big server OS release. Clearly, what’s included is a subset of what’s planned, and it seems likely that releasing this preview was intended to avoid a new SAC release coming out without any news about the full version. Organizations planning their upgrades might prefer to know more about the key scenarios they’ll be upgrading for, especially as the cost of Client Access Licences seems set to go up. So far, those scenarios are improved security (especially for Linux VMs), container support (especially for Kubernetes), massive hyperconverged infrastructure scale with cluster sets, and hybrid cloud options with Azure and Project Honolulu.
Windows Server 2016: The smart person’s guide (TechRepublic) This guide covers details about Windows Server 2016, such as new features, minimum requirements, install options, and how Microsoft’s virtualized services seamlessly integrate with the cloud.
For the past couple of years, Fitbit has flirted with making a smartwatch to compete with the Apple Watch, Pebble, and Samsung. But, until recently, the company had always stopped short of calling one of its products a smartwatch.
Enter the $199 Versa: A smartwatch that looks unlike anything the company has done before. It’s not too big, checks all the boxes of a fitness tracker, and includes some new smartwatch tricks from Fitbit.
A Fitbit that looks like a watch
The Versa isn’t Fitbit’s first watch-like device, but it is the first one that looks like a watch. Previous products like the Surge, Blaze, and even the Ionic looked more like activity trackers than watches. Boxy designs do not lend themselves to a svelte-looking watch.
With the Versa, the company focused on creating a watch that was more appealing to the overall market, instead of fitness enthusiasts first.
The end result is a watch that reminds me a lot of the Pebble Time 2 (Fitbit acquired Pebble in late 2016), with rounded edges and corners. According to Fitbit, however, the Versa’s design predates the Pebble acquisition and the similarities are merely coincidental.
The watch itself has a display that measures 1.34 inches. Fitbit’s promotional images make it look bigger than it actually is. With the goal of making a watch that appeals to male and female users, the Versa’s size is slightly smaller than the 42mm Apple Watch.
I would prefer a Versa that’s a bit bigger than its current size, but the smaller display and footprint isn’t a deal breaker by any means.
The Versa has a total of three physical buttons: A single button on the left side, and two on the right side. The buttons are used to navigate the watch’s interface or act as shortcuts to apps and settings when long-pressed.
The color touchscreen is responsive when using gestures to scroll through notifications or rearrange installed apps.
On the bottom of the watch is the charging port, which works in tandem with the included cradle to charge the Versa in roughly two hours. Also on the underside is the sensor needed for heart-rate tracking.
In the review package, Fitbit included the Versa with a standard watch band, a leather band, and a metal link band. Switching bands is a breeze, thanks to the quick-release mechanism. After using all three bands over the past couple of weeks, I found the standard and leather bands the most appealing. The metal link band is comfortable, but the clasp is a chore to open and close.
While I didn’t receive any compliments about the Versa on my wrist, I don’t have any issues wearing it to work out or to an event where fancy attire is required.
Killer app — and then some
Fitbit’s approach to everything it does has always revolved around helping its users become healthier, whether by counting steps and tracking sleep or by building a community of fellow users to cheer you on and compete against.
With the Versa, Fitbit is taking the millions of steps, nights of sleep, and countless other metrics it has collected over the years and putting them to use. A new Today view shows your current activity stats, such as steps or resting heart rate, and offers advice on what you can do to increase activity or sleep better.
The Versa sticks to Fitbit’s staple tracking of steps, distance, calories burned, sleep, heart rate, floors climbed, active minutes, and stationary time. Hourly activity — an opt-in feature — cheers you on to reach 250 steps each hour.
Even though there’s a long list of apps available for the Versa, in use it’s clear the company’s main focus is health. And, for some, that is the most important app.
For others, however, the smartwatch features are what make the Versa attractive. Currently, Android and iOS devices are supported and, for the most part, there’s feature parity between the two platforms. Notifications display on the Versa, and call alerts give the option to send an incoming call to voicemail or answer it on a connected phone.
The Android or iOS Fitbit app is used to control syncing, log activity, fine-tune the alerts you receive on your wrist, and it’s also how you access the app and clock face stores.
Neither store is overflowing with options, but that’s not necessarily a bad thing. Developers are still trying to figure out the best use for a small screen, and having a store overrun with apps that don’t offer any real value is frustrating for the user.
It’s not that Fitbit needs a long list of high-quality apps to excel as a smartwatch maker, but the overall look of third-party apps feels somewhat generic.
Not all Fitbit OS apps are a waste. A Starbucks app displays your Starbucks card number and barcode for easy payment, for example. The New York Times also has an app that is designed to quickly catch you up on the day’s news.
Navigating Fitbit OS is primarily done with gestures. From the clock face, a swipe up reveals the today view, a swipe to the left displays installed apps, and a swipe down lists notifications.
Long-pressing the left button acts as a shortcut to music controls for music that’s stored on the watch itself, or controlling music on your phone, along with quick settings to disable notifications and the display.
Currently, Versa supports Pandora and Deezer music streaming services, as well as syncing your own music library from a computer to the watch’s onboard storage, capable of storing roughly 300 songs. Versa works with any Bluetooth headphones or earbuds for listening to music without a phone nearby. Spotify support is something the company is looking into but doesn’t have anything to announce quite yet.
Fitbit claims four or more days of battery life for the Versa. After initial setup and tinkering with the watch, my first charge netted a disappointing 2.5 days of use. Subsequent charges have pushed past six days, so don’t get discouraged if you aren’t hitting the four-day estimate right away.
There are technically two different models of Versa, at least here in the US. There’s the standard Versa, and then there’s a Special Edition model. Outside of a different color and extra watch band, the biggest difference between the two is Fitbit Pay support.
At $229, the Special Edition model includes an NFC chip to support mobile payments through Fitbit Pay. When I asked Fitbit about the decision to release two models, the answer boiled down to the slow adoption of mobile payments in the US, by both consumers and retailers.
The narrative surrounding Fitbit’s fitness trackers has transitioned from users not really needing or wanting a smartwatch, to embracing the fact that the future of the wearable industry is the smartwatch.
Fitbit OS, the operating system running on the Ionic and now the Versa, was born out of the company’s acquisition of Pebble, a pioneer of the smartwatch. Fitbit now has dedicated app and clock face stores.
The company has reiterated its commitment to continue developing fitness trackers alongside smartwatches, but the about-face in attitude toward what a wearable is and should do surely will influence Fitbit’s product roadmap.
ZDNet recently spoke with Fitbit co-founder and CEO James Park to discuss the company’s future and will have a lot more to share soon.
There’s room for Versa
Of course, there’s still a lot of work for Fitbit to do to compete with the Apple Watch. But Fitbit feels it has positioned itself for success, leveraging its health ambitions and a $199 price point that’s appealing to most consumers.
In May, the company plans to release an update that adds female health tracking to the Fitbit app and Versa, along with the ability for Android users to reply to messages directly from the watch.
Continuous updates are part of the puzzle to creating a compelling smartwatch, and Fitbit is well aware.
The Apple Watch is still the best smartwatch for those who want a watch that works seamlessly with the iPhone — and that will be true as long as Apple limits what developers can and can’t do with its platform.
As for the Versa, well, it’s nice to finally have a reliable and affordable alternative that works across multiple platforms. I have zero hesitation in recommending the Versa. Welcome to the smartwatch race, Fitbit. It’s about time. The Versa is available for preorder right now and will begin shipping in one to two weeks, according to Fitbit’s website.