Amazon Web Services (AWS) has released SageMaker, a fully managed end-to-end machine learning service, and DeepLens, a video camera that runs deep learning models, in an effort to bring machine learning to the enterprise.

SageMaker models can run on general-purpose instance types or GPU-powered instances, with a typical workflow beginning in a Jupyter notebook for data exploration, cleaning, and preprocessing.

Enterprises can use any of the common supervised and unsupervised learning algorithms and frameworks built into the product, or create their own models, and training can scale across tens of instances for faster model building.

Speaking at the re:Invent conference in Las Vegas, AWS CEO Andy Jassy said that builders don't want machine learning to be so difficult, cryptic, or black-box; they want it to be much easier to engage with.

By simplifying the building of machine learning models, Jassy said, the techniques will be within reach of businesses that cannot employ specialists; according to him, there are not many expert machine learning practitioners in the world today.

DeepLens comes loaded with a set of pre-trained machine learning models to give developers hands-on experience with image detection and recognition. Developers can also train their own models with SageMaker and run them on the camera.

The $245 high-definition camera captures 1080p video and records sound through a 2D microphone array.

It runs Ubuntu 16.04 and comes preloaded with AWS Greengrass Core. A device-optimised version of MXNet is also included, and other frameworks such as TensorFlow and Caffe2 can be used as well.

DeepLens can run incoming video and audio through on-board deep learning models quickly and with low latency, reserving the cloud for more compute-intensive, higher-level processing.
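The on-device/cloud split described above can be pictured as a confidence-gated pipeline: run a cheap local model first, and offload only low-confidence frames. This is an illustrative sketch of the pattern, not AWS's actual API; `local_model` and `cloud_analyze` are hypothetical stand-ins.

```python
# Illustrative edge/cloud split: cheap on-device inference first,
# with a fall-back to heavier cloud processing when confidence is low.
# Both functions below are hypothetical stand-ins, not AWS APIs.

def local_model(frame: str):
    # Pretend on-device inference: returns (label, confidence).
    return ("person", 0.62) if "person" in frame else ("unknown", 0.10)

def cloud_analyze(frame: str) -> str:
    # Pretend compute-intensive cloud call for hard frames.
    return "cloud:" + frame

def process(frame: str, threshold: float = 0.5) -> str:
    label, conf = local_model(frame)
    if conf >= threshold:
        return label                 # fast, low-latency on-device result
    return cloud_analyze(frame)      # offload higher-level processing

print(process("frame_with_person"))  # handled on-device
print(process("blurry_frame"))       # offloaded to the cloud
```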

AWS SageMaker and DeepLens to bring Machine Learning to the enterprise

Although the Galaxy S8 was launched only a few months ago, Samsung already seems eager to entice the public with the next iteration of its flagship line. The Galaxy S9 is set to be released sometime during the first half of 2018, and analysts have been eager to catch a glimpse of what is in store. Will this model prove to be a rival for the iPhone X? What prices can we expect to pay? Are there any features which will stand out above the rest? Let us take a quick look at some rumors as well as a handful of reliable predictions.

The Price Tag

Consumers have naturally been eager to learn how much the S9 and S9 Plus will cost. There is no way to ascertain an exact figure, but it will likely mirror the prices attributed to the S8 and the S8 Plus. If we follow this logic, buyers can expect to pay anywhere between £680 and £780 (for the Samsung Galaxy S9 Plus).

Rumored Technical Specifications

We should expect this next model to boast superior levels of processing power as well as a continued user-friendly appeal. While Samsung is not likely to break the mold in terms of revolutionary accessories, the S9 is still predicted to rival the top brands in the business. At the time this article was written, here are some of the anticipated specs of the upcoming upgrade:

  • A Super AMOLED screen.
  • Screen sizes of 5.8 inches (for the standard model) and 6.2 inches (for the S9 Plus).
  • An embedded biometric fingerprint scanner.
  • An AI assistant.

In terms of the processor, many expect the S9 to use the Qualcomm Snapdragon 845 in American markets and Samsung's own Exynos chip in the United Kingdom.

What's the Futuristic Appeal?

One of the issues involved with smartphone branding is that customers recognise a model just as much by its physical appearance as by the functions it offers. So Samsung will have to strike a delicate balance between outward innovation and a familiarity that customers can identify with.

It is rumoured that the bezels along the edge of the display might be refined to appear slightly narrower, and some even feel that the rear of the phone will closely mirror the design adopted by Apple. However, a great deal of interest has centred on the camera said to be in the works.

Dual Rear-Facing Camera

The biggest improvement is thought to be the inclusion of a dual rear-facing camera, and the main reason this becomes feasible involves the fingerprint scanner: Samsung is said to be ditching the rear-mounted scanner in favour of one embedded within the display, freeing up room for a more advanced camera. Dual cameras should definitely appeal to those who are fans of snapping high-definition images or the occasional selfie.

Other Details

Of course, the details of the Samsung Galaxy S9 will become clearer as we approach its release date. Questions such as whether or not the company will eliminate the existing headphone jack will hopefully be answered in the interim.

Regardless of how advanced the S9 proves to be, we should never forget that it is always possible to enjoy substantial savings on previous models through the use of online comparison portals such as Broadband Choices. 2018 should be a very interesting year in terms of advancements in smartphone technology.

A Peek at the upcoming Samsung Galaxy S9 and S9 Plus

Quantum key distribution (QKD) is a secure communication method for distributing cryptographic keys during a transmission, implemented as a cryptographic protocol that relies on components of quantum mechanics.

It uses lasers to transmit encoded bits, enabling two parties to produce a shared random secret key known only to them, which is then used to encrypt and decrypt messages; it can also be used to connect and secure quantum computers.
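Once both parties hold the same random secret key, using it for encryption and decryption is straightforward. A minimal sketch with a simple XOR one-time pad over a shared key (illustrative only; this shows the key's role, not the QKD protocol itself):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte;
    # applying the same operation twice recovers the original.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # stands in for the QKD-shared key

ciphertext = xor_bytes(message, key)     # sender encrypts
recovered = xor_bytes(ciphertext, key)   # receiver decrypts with the same key
assert recovered == message
```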

The security of QKD rests on the architecture of quantum mechanics itself, in contrast to traditional public-key cryptography, which cannot provide any mathematical proof of the actual complexity of reversing its one-way functions.

Existing QKD systems, however, transmit data at “between tens to hundreds of kilobytes per second,” a rate insufficient for most uses, including chat and telephony.

Now, a group of researchers at Duke University, Ohio State University and Oak Ridge National Laboratory has solved what is perhaps the biggest problem in quantum key distribution.

The researchers were able to pack more information into each transmitted photon by adjusting its release time and phase, thereby encoding two bits instead of just one. This effectively makes it possible to transmit keys quickly and securely, and, most importantly, faster over fiber-optic cables.
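The gain can be seen with a toy encoding: pairing each photon's time bin with its phase yields four distinguishable states, i.e. two bits per photon instead of one. This is a schematic illustration of the counting argument, not the researchers' actual protocol:

```python
# Toy illustration: a photon prepared in one of two time bins and one of
# two phases carries log2(2 * 2) = 2 bits, doubling the per-photon rate.
from math import log2

TIME_BINS = ("early", "late")
PHASES = (0, 180)  # degrees, schematic values

# Enumerate the distinguishable photon states.
states = [(t, p) for t in TIME_BINS for p in PHASES]
bits_per_photon = log2(len(states))
print(bits_per_photon)  # 2.0

def encode(two_bits: str):
    # Map a 2-bit string, e.g. "10", onto a (time bin, phase) pair.
    return (TIME_BINS[int(two_bits[0])], PHASES[int(two_bits[1])])

def decode(state) -> str:
    t, p = state
    return str(TIME_BINS.index(t)) + str(PHASES.index(p))

assert decode(encode("10")) == "10"
```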

Daniel Gauthier, a professor of physics at The Ohio State University, said we are likely to have a functioning quantum computer able to start breaking existing cryptographic codes in the near future, and that we really need to think hard about the different techniques we could use to secure the internet.

Apart from the single-photon detectors, every component involved in the breakthrough research already exists in the telecommunications industry: the system is built from off-the-shelf parts, and barring the detectors, nothing is unavailable to telecommunications providers.

Big Steps forward for Quantum Key Distribution Encryption

Microsoft has released the beta of the Windows Compatibility Pack, which brings 20,000 APIs to .NET Core on Windows, Linux, and macOS, making it more like the Windows-only .NET Framework. The extended API access is meant to help web developers move code from the Windows-oriented .NET Framework to the cross-platform .NET Core.

While the .NET Framework's spotlight is on Windows desktop development, the open-source .NET Core is optimized for building web applications for Windows, Linux, and macOS.

And as .NET Core enables web applications that can scale and run on Linux, the addition of the .NET Framework APIs makes it even more versatile.

Migration follows a series of steps. For instance, to move an ASP.NET MVC application deployed on a Windows Server to an ASP.NET Core application on Linux in the Azure cloud, the company advises first migrating to ASP.NET Core (which extends .NET Core for web development) while still targeting the .NET Framework, then moving to .NET Core while still on Windows, and only then moving to Linux and Azure.

Some developers, however, want to use Microsoft desktop technologies such as WinForms, Windows Presentation Foundation, or ASP.NET, and so may decide to keep coding on the .NET Framework.

Microsoft offers a complete migration guide that covers, among other things, identifying third-party dependencies and using the company's API Portability Analyzer tool.

How Microsoft is helping Developers move to Cross-platform .NET Core

A new feature revealing how many users are “talking about” popular tweets has shown up under some tweets embedded outside the Twitter platform on the web, replacing both the retweet and reply counts with a single, cumulative metric.

The feature is perhaps part of Twitter’s move to explore other ways of providing more contextual information about tweets.

The network has long relied on metrics that may be unfamiliar to audiences outside the Twitterverse; retweet and reply counts are harder to understand than a direct expression of how many users are engaging.

Simplifying the information makes marketing sense: embeds are exactly where users and potential advertisers unfamiliar with the platform are most likely to encounter tweets.

But, as always, there is no guarantee this feature will see the light of day, as with most of Twitter's experiments, though the engagement metric presents another smart way to make the platform's advertising reach measurable.

Twitter wants to show Number of Users ‘talking about’ a Tweet

The Quartz report making the rounds says that Android, via the system that handles messaging services to ensure delivery of push notifications, requests the unique addresses of mobile phone masts (Cell IDs), making it possible for Google to track the location of Android users.

Android phones are able to gather location data even when location services are actively turned off in settings, and even when no SIM card from any carrier is inserted.

By pinging Google's servers through a system known as Firebase Cloud Messaging, the Cell ID codes (a unique number that identifies a cell tower) can be used to determine a device's location.

Such location data is especially beneficial to advertisers who want to target customers by geography through Google's online AdWords advertising program.

Google, however, states that the Cell ID codes were never stored on its network, meaning the data was discarded almost immediately, and it promises it will no longer request such data.

But the company still uses other codes, such as the mobile country code (MCC) and mobile network code (MNC), to “provide necessary network information for messaging and notification delivery”, and these codes can reveal a device's approximate location to some apps.
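For a sense of what these identifiers look like: a PLMN code concatenates the MCC and MNC, and splitting it reveals only the country and carrier, which is far coarser than a Cell ID. A minimal sketch, where the example code 310260 is illustrative (MCC 310 is one of the US country codes):

```python
# Minimal sketch: split a PLMN code into MCC (always the first three
# digits) and MNC (two or three digits; the split is passed explicitly).

def split_plmn(plmn: str, mnc_digits: int = 3):
    mcc = plmn[:3]                  # mobile country code
    mnc = plmn[3:3 + mnc_digits]    # mobile network code
    return mcc, mnc

mcc, mnc = split_plmn("310260")     # illustrative US-network code
print(mcc, mnc)  # 310 260
```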

Google to end the practice of tracking Android Cellular location

Following the furore about YouTube Kids allowing horror and troubling videos to get past its filters on an app designed primarily for youngsters, Google has tightened the leash around the kid-friendly platform.

While the algorithmic fiasco allowed some nefarious YouTubers to slip inappropriate content into the designated kid-friendly streams, such creators will now have a harder time getting viewers and monetization.

Google's tightened leash on kid-friendly content includes stricter controls, better understanding of context, and comment patrol on videos featuring kids.

Stricter controls: The policy now excludes “content with family entertainment characters but containing mature themes or adult humor”, as well as “content featuring minors that may be endangering a child, even if that was not the uploader's intent”, according to Google.

Understanding context: As YouTube grows, it has doubled the number of Trusted Flaggers to better track who is behind a video and whether it qualifies as kid-friendly. For instance, a sexy Dora the Explorer cosplay draws on a children's educational programme, but its erotic slant is clearly not appropriate for kids.

Comment patrol on kids' videos: This targets videos with inappropriate comments about the children in them; in such cases Google will turn comments off altogether.

Google hopes the measures will go a long way toward curbing the growing infiltration of inappropriate content on YouTube Kids, though it claims such content is just a tiny fraction of the YouTube Kids universe.

Google's crackdown on abuse of kid-focused YouTube videos

Instagram's new feature makes it possible to request to join someone's live stream while watching it; if accepted, you start broadcasting live along with the host. The app already allows anyone streaming live to invite a friend to join, but the new feature lets viewers initiate the request.

The feature shifts the possibility for collaboration to the guest; allowing them to place a direct request to the host will enable more people to participate in a shared experience.

Instagram, however, may need a screening system that lets the host verify a guest and, if inappropriate content is shared, boot the offending guest.

That said, the risk of someone sharing inappropriate content when they join is no different from allowing random people to comment on your content on the web.

But the ability to see pending requests will let the host pick a suitable fan to join the live stream rather than choosing at random; a viewer with an insightful comment in the chat could even be chosen from the request list.

The possibilities for collaboration will be greatly enhanced as the feature rolls out, and it may mean more interaction on the social platform. Just imagine what it will feel like for fans “taking calls” from a celebrity, or exchanging pleasantries.

Instagram makes it Possible for Users to request to join Live Stream

Smart speakers like the Amazon Echo were initially labeled “crazy,” “weird,” and “not very useful”; consumers called them “pointless” because virtual assistants already existed on the phone, and talking to the room seemed frivolous.

Fast forward to the present day: with a smart speaker in the home, you just talk and get an audible response or an action (like turning on the heater), and consumers are gradually realizing that they really do want virtual assistant-powered smart speakers.

Smartphones can't fully replicate these smart speaker capabilities despite their hands-free options; when a phone is in a pocket or purse, you can't just talk to the virtual assistant.

Thus, the Amazon Echo became a runaway hit, now emulated by the industry’s biggest technology companies.

Nowadays, consumers are gradually realizing they don’t want to be without virtual assistant-powered smart speakers such as the Amazon Echo line, Google Home and Apple HomePod.

The growing dependence on the “smart speaker behavior” of simply talking when you need an action performed in the home will make you crave it. And over the next few years, these smart speakers will be joined by hundreds of wearables that also give instant access to virtual assistants.

AI-infused wearables will provide extra security and help eliminate vulnerabilities in voice authentication.

The growth of AI wearables and the demonstrated ability of secure voice ID point to a future in which virtual assistants and wearables converge as the “killer apps” for the smart home.

Why the Future of Virtual Assistants is the Smart Home, not the Smartphone

Twitter, following outrage from users over its handling of abuse cases, has rewritten its user guidelines as part of a broader effort to tackle trolling and abuse, and to make sure users know what's allowed on the service.

The company hasn't been clear about how it penalizes users: as a formal policy, it never comments on individual offenders, and its punishments have gone largely unexplained.

Now the company has published new details outlining the different punishments meted out to offenders; many users aren't aware of the many techniques Twitter uses to try to keep people in line.

The punishments range from the barely noticeable, such as algorithmically showing a tweet to fewer people, to the rather drastic lifetime ban from the service.

Twitter's enforcement strategy uses a number of data points to determine the severity of punishment to dole out. One major contributing factor is how newsworthy a tweet is, which requires subjective analysis.

It also looks at other criteria, such as who reported the tweet and whether the offending user has a history of abuse or harassment on the platform.

Based on the severity of the offense, Twitter can temporarily lock you out of your account until you verify your identity using either a phone number or email, which eliminates the possibility of anonymous trolling.
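The graduated approach can be pictured as a severity ladder. This sketch is only an illustration of the idea; the score thresholds and action names are invented, not Twitter's actual algorithm:

```python
# Illustrative severity ladder, not Twitter's real system: a numeric
# severity score plus an offender-history flag picks the punishment.

def pick_action(severity: int, repeat_offender: bool) -> str:
    if repeat_offender:
        severity += 2                # a history of abuse escalates things
    if severity <= 2:
        return "downrank"            # show the tweet to fewer people
    if severity <= 5:
        return "temporary_lock"      # verify identity via phone or email
    return "permanent_ban"           # the drastic end of the ladder

print(pick_action(1, False))  # downrank
print(pick_action(4, True))   # permanent_ban (escalated to 6)
```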

Twitter verifies notable users of public interest so that you know you're hearing from the real person, not an imposter. The company is now saying, formally, that it can take away verified status if a person violates its rules.

Twitter Guidelines on Penalties for Nasty Tweets or Messages

The Brookings Institution, a century-old American research group, reported in its Metropolitan Policy Program, based on a new study, that over the last decade an increasingly broad range of jobs has come to require employees to work with computers on a daily basis.

Those highly digital jobs have seen higher productivity growth, and jobs requiring more tech proficiency also pay more.

The Brookings report analyzed changes in the digital requirements of over 500 occupations covering 90 per cent of the workforce in selected industries from 2001 to 2016.

For workers in highly digital occupations, the mean annual wage reached $72,896 in 2016, while those in mid-level digital jobs earned $48,274 and those in low-level digital jobs were paid $30,393.

And these earnings can't be explained away by academic qualifications alone: irrespective of a worker's level of education, computer skills still brought a premium wage, one that has almost doubled since 2002.
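A quick check of the ratios puts the reported premium in perspective, using the mean wages quoted above:

```python
# Mean 2016 wages from the Brookings report, by digital level of the job.
high, mid, low = 72896, 48274, 30393

print(round(high / low, 2))  # highly digital jobs pay ~2.4x low-digital ones
print(round(mid / low, 2))   # mid-level digital jobs pay ~1.59x
```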

How savvy you are with a computer affects your salary in jobs far outside computer science fields.

According to the report, agriculture and construction jobs have the lowest levels of digitization and lower pay, while finance, insurance and media are among the industries that require the most tech ability and saw some of the highest growth in digitization over the last decade.

Why Jobs requiring more Tech proficiency pay more

Until now, Instagram users on the mobile web could only view Stories, which had to be created in the native app. But starting today, mobile web users can post to Instagram Stories, though Instagram's augmented reality masks and video sharing aren't supported.

Videos and the rest of Instagram's creative tools, like doodling, stickers, location tags, polls, and more, are still restricted to the native app.

But users can snap photos, overlay text captions, and share them with friends from the mobile web, as well as save posts they find in the feed.

The company intends to improve the creative tools over time, and the native-app restriction stems partly from the awkwardness of posting vertical portrait images from a landscape view; there's no confirmation yet of whether photo posting will come to the desktop.

In the next few weeks, users will find a camera icon in the top left corner which they can tap to shoot or upload a photo for their Story. Tapping the bookmark icon on a feed post sends it to the Saved folder, accessible from the top right of the screen.

They can also add text and change its color; when they're done, they post it to Stories by tapping “Share to Your Story”.

Instagram Stories will certainly receive a boost, and with competitors digging hard into creative features, posting from the mobile web could help Instagram lock in audiences in the developing world.

Instagram brings photo-only posting to Stories on mobile web

While Amazon's Alexa is renowned for its third-party app support, Google is now toeing the same line as it woos more developers on board with a slew of new features to incentivize the community.

Google's steps toward making Assistant a better experience include new push notifications, daily updates and additional language support.

The speaker-to-phone transfer capability is perhaps the most compelling of the new features; it is enabled by a new API that can start an action on a Google Home speaker and complete it on the phone. An updated version of the Cancel command also lets an app send a polite farewell before logging off.

The API enables apps to send important updates to users on the phone, while Google Home speaker support is still to come.

For "Families badge" is also a new feature, which designates if an app is okay for the little ones and have support for additional languages, including Spanish, Italian and Brazilian Portuguese.

All this comes on the heels of Google's recent addition of a number of new features to its smart speaker line, along with the Pixel Buds, which have already started shipping.

Google Assistant gets tons of Third-party App support features

Google's machine learning system, TensorFlow, is now a core part of commercial products like Google Voice Search, Google Photos, and YouTube, and the company has now shared a developer preview of TensorFlow Lite, targeting mobile devices.

First announced at Google I/O back in May, TensorFlow Lite is built from the ground up for mobile devices.

The company says it's an evolution of TensorFlow for mobile, aimed at creating a more lightweight machine learning solution for smartphones and embedded devices.

TensorFlow Lite also supports the Android Neural Networks API, with an emphasis on a lightweight runtime that initializes quickly and improves model load times across a variety of mobile devices.

This is a developer preview rather than the full release, and there's still much more to come as the library is further expanded.

For now, TensorFlow Lite is tuned and ready for a few different vision and natural language processing models like MobileNet, Inception v3 and Smart Reply.

It's currently available to both Android and iOS app developers, who can take TensorFlow Lite for a spin now.

Google targets mobile devices with TensorFlow Lite

The Android system, in an effort to make specific features or functions easier for people with disabilities to use, allows developers to access core components via accessibility services.

Many developers take advantage of this powerful API to enhance the functionality of their apps, quite often in ways not aimed at helping disabled users.

There are over a thousand such apps in the Play Store that add, enhance, and tweak functionality to make the Android experience much better.

But an imminent change to the Play Store terms could put an end to it all: Google is preparing to shut down universal access to its accessibility services APIs within the next 30 days, according to a report from Android Police.

The report claims that emails were sent to a number of developers warning that apps using accessibility services for purposes other than “to help users with disabilities use Android devices and apps” may be booted from the Play Store unless the functionality is removed.

This shutdown may genuinely impact Android users' experience, as there's a high probability that many of their favorite apps use accessibility services to enhance functionality.

Your favorite apps will not disappear outright, but losing these capabilities does present some difficulties.

Google warns that “repeated violations of any nature will result in the termination of developers account, and investigation and possible termination of related Google accounts.”

Google's imminent shutdown of Accessibility Services API permissions

The iPhone X's new form of device security, Face ID, has been touted as a flagship feature, and it has naturally become a target for hackers. A Vietnamese security firm, Bkav, claims to have cracked Apple's facial recognition system using a replica face mask that combines printed 2D images with three-dimensional elements.

Bkav pulled this off with a consumer-level 3D printer, a hand-sculpted nose, normal 2D printing and a custom skin surface designed to trick the system, as shown in a video proof.

Apple, for its part, appears to be pretty skeptical about the purported hack, as there are quite a few ways the video could have been faked; the most obvious would be training Face ID on the mask before attempting the unlock.

The company, as if anticipating such a maneuver, had noted that Face ID matches against depth information, which isn't present in print or 2D digital photographs, and that it's designed to protect against spoofing by masks or other techniques through the use of sophisticated anti-spoofing neural networks.

According to Bkav, it doesn't really matter whether Face ID “learns” new images of the face, since that does not affect its claim that Face ID is not an effective security measure.

Bkav claims to have applied a strict “absolutely no passcode” rule when crafting the mask, to give a more persuasive result, given that Face ID takes additional captures over time and augments its enrolled data with newly calculated mathematical representations.

Questions remain: no one knows how legitimate the purported hack really is, and the iPhone X locks after just five failed Face ID attempts, but it's unclear how many attempts Bkav made.

Additionally, Face ID is attention-aware and recognizes whether your eyes are open and looking toward the device, making it harder for someone to unlock your iPhone while you are asleep.

The group, however, did not share its findings with Apple, and Apple has made no official statement acknowledging or rejecting the claims.

How Apple's iPhone X Face ID was beaten by a mask

Microsoft's next major update for Windows 10, codenamed Redstone 4, is set to arrive early in 2018, with the latest Insider build, 17035, doing away with “sneakernet” in favor of a feature called Near Share, plus a phone-like auto-suggest feature for typing.

While the company has already started introducing new features via Redstone 4 builds, the update is expected to be a much bigger upgrade than the earlier Fall Creators Update, and will bring back features like Timeline that were cut from the Fall release.

Meanwhile, Windows Insiders have been trying out the new OS update and some of its notable features, such as Near Share, a new Audio settings menu, and the ability to configure bandwidth.

What is Near Share and How does it work?

Near Share is a handy feature that lets your PC seek out nearby PCs and offer to share a file. It replaces hauling out a USB key to copy a file over, or sharing through Bluetooth, email, Facebook, and just about any other medium.

To get started, install Insider build 17035, then turn Bluetooth on and make sure the Near Share button in the Action Center is toggled on. From then on, every time you see the Share icon, you'll have the option to share the file or URL with a nearby device.

Ability to Configure Bandwidth

Insider build 17035 extends the Fall Creators Update function that let you adjust the bandwidth used by Windows updates, as distinct from downloads you start yourself, like apps from the Store.

Beyond system updates, many Windows users are preoccupied with unnecessary bandwidth consumption from large, rarely used downloads such as games.

Other changes include moving the main Audio settings, such as the ability to set Windows sounds, out of the Control Panel and into the Settings menu. The touch keyboard also picks up a couple of improvements, like a new “Acrylic” themed background and support for over 190 new foreign-language layouts.

Additionally, there's now support for text suggestions from the Japanese-language AI chatbot Rinna, which will suggest phrases as you type; don't expect the same function to reach American users anytime soon.

A Peek into Microsoft's Windows 10 Redstone 4 update

Remember Facebook's Events app? The company has rebranded it as Facebook Local, bringing events and places into a dedicated search hub powered by Facebook's 70 million business pages and friends' reviews.

Already live in the U.S. on iOS and Android, Facebook Local helps users discover bars, restaurants, and nearby attractions, making it easier to do the lookups that are so common when planning an outing with friends.

Facebook Local's main page shows shortcuts to nearby restaurants, cafes, drinks, attractions, and more, as well as where friends and people have been, letting you pick between the great bars on a block or find out whether one has a live band playing.

It includes a calendar of your day’s Events, a "Trending Events" feed, guides to music, nightlife, art, and other happenings, and options to see everything happening on certain days.

As an alternative to Yelp or Foursquare, Facebook Local combines a wide range of local business data with your social graph and user-generated content like photos and reviews. That in turn lets businesses get more return on their Facebook Pages, as they turn their offline activities into Facebook Events and seek more social interaction with their customers.

For now, however, you can't order directly from Local as you can with the Facebook app's feature, though restaurant Pages do include a link to a delivery service.

It also improves discovery, with things like the ability to search for ‘cappuccino’ and not just ‘cafes’; those “enhancements reflect back immediately” and actually promote the given local business.

Facebook may have finally given Events the spotlight they deserve within local business listings; if the company's mission is “bringing the world closer together”, few things do that as vividly as a mobile app.

How Facebook Local will impact small businesses

Twitter's former 140-character limit was a holdover, but the company finally expanded tweets to 280 characters, and it has now addressed another long-standing constraint with support for 50-character display names.

While the change doesn't affect a user's Twitter handle, it could be useful for users with long names.

The move may be even more impactful than the 280-character expansion and will help unlock engagement, though many are disappointed that the company hasn't given as much attention to long-standing issues like trolling and abuse on the platform.

The increase in the length of a display name to 50 characters makes room for adding a middle name or even a few more emojis.

However, it applies only to display names and not to usernames (the username is what follows the @ symbol, often referred to as a “Twitter handle”), which remain pegged to a 15-character limit.
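The two limits are easy to conflate; a small sketch of the distinction (the validator below is hypothetical, not Twitter's API):

```python
# Hypothetical validator illustrating Twitter's two separate limits:
# display names may now run to 50 characters, handles stay capped at 15.
DISPLAY_NAME_MAX = 50
HANDLE_MAX = 15

def valid_profile(display_name: str, handle: str) -> bool:
    # The leading @ is not part of the handle itself.
    return (len(display_name) <= DISPLAY_NAME_MAX
            and len(handle.lstrip("@")) <= HANDLE_MAX)

print(valid_profile("Jonathan Alexander Middlename", "jonalex"))   # True
print(valid_profile("ok", "@this_handle_is_way_too_long"))         # False
```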

Just after 280-character support, Twitter expands to 50-character display names

Everyone Can Code is an initiative launched by Apple earlier in the year that gives everyone the power to learn, write, and teach coding, with the aim of helping everyone bring their app ideas to life. Apple announced the expansion of "Everyone Can Code" curriculum to more than 20 colleges and universities outside of the United States.

The program offers App Development with Swift Playgrounds, a full-year course designed from the ground up by Apple engineers and seasoned educators.

The company specifically mentioned RMIT University, Australia's largest higher-education institution, as “one of the broadest international deployments” of the curriculum. Going forward, RMIT will offer the App Development with Swift Playgrounds curriculum via RMIT Online, with a new vocational course available on campus.

According to Apple's CEO, Tim Cook, Apple is giving students the tools they need to help change the world with the global rollout of the Swift curriculum. Cook commented:

We launched the Everyone Can Code initiative less than a year ago with the ambitious goal of offering instruction in coding to as many people as possible. Our program has been incredibly popular among US schools and colleges, and today marks an important step forward as we expand internationally. We are proud to work with RMIT and many other schools around the world who share our vision of empowering students with tools that can help them change the world.

Other international universities that will also be offering the curriculum include: Hogeschool van Arnhem en Nijmegen (Netherlands), Mercantec (Denmark), Unitec Institute of Technology (New Zealand) and Plymouth University (United Kingdom).

Under the program, Apple will also offer coding curriculum to students in elementary school, middle school, and high school.

Apple extends ‘Everyone Can Code’ Initiative to more Students around the World

Skype has introduced a new feature called “Photo Effects” (essentially photo stickers), which comprises face masks, decorative borders, and witty captions, among others.

The Photo Effects feature takes direct aim at Instagram and Snapchat, but unlike the photo stickers on those apps, Skype suggests which stickers to use based on the photo's content, the day of the week, or other such criteria.

It is based on technology Microsoft introduced earlier in its camera app Sprinkles, which leverages machine learning to do things like detect faces in photos, suggest captions, figure out someone's celebrity look-alike, and determine age and emotion.

The app also lets you swipe through the suggestions, with various props to add to your photo: funny captions, stickers displaying your celebrity look-alike, and more.

Skype's photo-effect suggestions are automatic, appearing at the press of a button; as you swipe through them, you're prompted to add items like a smart face sticker, your location, a caption that references the day, or even a mystery face swap.

The resulting image can be shared with friends in a conversation or posted to Skype's new Highlights feature, essentially an Instagram/Snapchat Stories clone.

Photo Effects is launching on Skype for Android and iOS; the rollout has already started and will reach all users over the coming week.

Skype pitches Photo Effects to take on Instagram and Snapchat

App Store metrics as of November 6 put iOS 11 at 52 percent of active iPhones and iPads worldwide, while 38 percent of devices still run iOS 10 and only 10 percent use an earlier version of the software.

Apple had delayed updating the App Store support page, which formerly was refreshed immediately after the release of a new iOS version, the launch of the iPhone X last week notwithstanding.

The company reported roughly the same figure last year, with 54 percent of users running iOS 10 at the time.

Emojis remain a main attraction of new iOS updates, as most users don't want to be left out of the fun. Apple added new emojis as part of iOS 11.1, just as it did with iOS 10.0.

Emojis have become an evolving part of popular culture, and many users want to be able to enjoy them all rather than seeing placeholder marks for missing characters.

Apple is close to releasing iOS 11.2, which will arrive with Apple Pay Cash, Apple's peer-to-peer (P2P) payment service now in beta, allowing U.S. customers to send and receive cash in Messages.

Apple metrics puts 52 percent of iPhone and iPad as running iOS 11

Opera's plan to rethink and modernize the browser as part of its Reborn project continues, as the desktop browser will now let you stream 360-degree videos to your Vive or Oculus headset.

While the Reborn Opera introduced a slew of new features, the browser's usage share hasn't taken off, and it's still playing catch-up to Chrome, Safari, and Firefox.

Previous features include built-in browser support for chat apps like WhatsApp, Facebook Messenger, and Telegram, among others. Opera's latest update adds VR support, letting users stream 360-degree videos to their HTC Vive or Oculus headset, as well as to any OpenVR-compatible device.

According to the company, the Opera desktop browser has grown by 65 percent in the US, 64 percent in France, and 50 percent in the UK.

Besides VR Support, Opera browser now lets you edit screenshots, as well as letting you snap selfies with your laptop. You can download the new Opera browser now on the official website.

Opera brings VR Support and 360 video streaming capabilities

Snapchat is favored by a youthful demographic, but competition from Instagram and the fact that Snapchat is difficult to understand or hard to use have made it a turn-off for more mature audiences.

Snap is banking on a new enhancement that will provide its over 170 million active users with a personalized experience, leveraging the benefits of machine learning without compromising the editorial integrity of the Stories platform.

According to the company, it now indexes millions of Stories every day through its work on Search and Maps, giving it the long tail of content necessary to provide a truly personal experience.

Also, a facelift is imminent, as Snap acknowledges that users find its app difficult to understand, prompting the team to consider a redesign of the messaging app to make it easier to use.

Snapchat's growth slowed to 13.8 percent, compared with 14 percent in Q1 2016 and 17.2 percent in Q2 2016, reaching 143 million daily active users, which isn't enough to satisfy investors or keep up with Facebook's Instagram.

The company concedes that a makeover will be disruptive in the short term, but maintains that it can't predict how the behavior of its community of users may change with the updated application.

If the redesign is botched, Snapchat could see more teens abandon the app, even as Instagram makes it harder for Snap to gain a foothold among more mature age groups.

Snapchat's content is scattered across the Stories list, the Discover channel, search, and Snap Map; the algorithmic Stories feed was implemented to make it easier for people to find things they care about, instead of the reverse-chronological feed that simply shows what was posted most recently.

Snap contemplating a redesign for Snapchat to overturn failings

Apple Pay's peer-to-peer (P2P) payments beta is live in iMessage; starting today, users can send and receive cash right within the Messages app on iPhones.

It is now available in iOS 11.2 beta 2; once you've updated, you'll see an Apple Pay button in the apps section of Messages that lets you initiate a payment.

You'll have to opt in via the iOS Public Beta program, though; the feature is available to U.S. customers only, on devices running iOS 11.2 or later with two-factor authentication set up on their Apple ID.

A payment is triggered by simply asking for money in a message or tapping a message sent by someone else asking for money. Funding can come from any debit or credit card added to Apple Pay, and Apple charges no fees for money funded through debit cards.

Credit-card funding, however, incurs an ‘industry standard’ fee, likely a few percent.

The first time someone sends you money, you will have to opt in to accept it and will be issued a new virtual Apple Pay Cash card, which can only be used to send money or pay for things via Apple Pay.

Also, the card functions as a transaction log for all your Apple Pay purchases on the web or at the store.

Apple Pay Cash simply offers the vital person-to-person extension for Apple’s payments system, and it will be interesting to see how it influences the behavior of people who have mixed iPhone/Android connections.

Apple Pay's peer-to-peer (P2P) payment launches in beta, allows sending and receiving cash in Messages

For most users who upgraded to iOS 11.1, the experience wasn't all that smooth: typing the letter "i" gets replaced with the letter "A" and a Unicode symbol instead of auto-correcting to a capital "I".

While the bug persists across several apps, quite oddly, not everyone who upgraded to iOS 11.1 is affected.

The annoying bug affects most users, though, and Apple has provided a temporary workaround until a permanent patch is made available.

If, after updating your iPhone, iPad, or iPod touch to iOS 11 or later, you discover that typing the letter “i” autocorrects to the letter “A” with a symbol, here's what you can do to temporarily fix the issue until a software update is released.

    • Go to Settings > General > Keyboard > Text Replacement.
    • Tap +.
    • For Phrase, type an upper-case "I". For Shortcut, type a lower-case "i."

If done successfully, when you type "i" and hit space, it will automatically correct to "I".

The workaround makes use of the Text Replacement feature built into iOS, and requires you to create a text replacement that would automatically change the lowercase "i" to a capital, thereby overriding the bug.

Update: Apple has just released iOS 11.1.1, with a fix for the bug as its flagship change.

Solve Apple iOS 11.1 autocorrect 'I' bug with this workaround

Most platforms can't run JVMs, restricting the use of Kotlin to JVM-friendly platforms like Android; now JetBrains has made available its Kotlin/Native technology, which produces native binaries so Kotlin code can run without a Java virtual machine.

The beta version of the CLion IDE allows compilation of Kotlin programs directly to an executable machine-code format.

Kotlin/Native uses LLVM compiler technology to generate machine code; the back end, runtime implementation, and native-code-generation facility are provided in the CLion IDE beta.

The Kotlin/Native preview's supported target platforms include macOS, iOS, Ubuntu Linux, and Raspberry Pi.
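For illustration, here's the sort of dependency-free Kotlin program such a toolchain would compile straight to a machine-code executable (a minimal sketch; the function is just an example):

```kotlin
// Plain Kotlin with no JVM-only dependencies, so Kotlin/Native
// can compile it directly to a native binary for any supported target.
fun fib(n: Int): Long =
    if (n < 2) n.toLong() else fib(n - 1) + fib(n - 2)

fun main() {
    println(fib(10)) // prints 55
}
```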

Developers can use Kotlin/Native by downloading CLion 2017.3, the beta version of JetBrains's IDE for C and C++ development. They'll also need to download two plugins, Kotlin and Kotlin/Native, from the JetBrains Plugin Repository to add Kotlin and Kotlin/Native support to CLion.

CLion allows you to configure several toolchains (i.e., CMake, compiler, and debugger sets) and link them to the appropriate CMake profile.

To use a particular CMake profile while running or debugging your app, simply select it in the run-configuration switcher on the toolbar.

JetBrains’s IDE for C and C++ development brings Kotlin/Native technology

While Google only recently upgraded YouTube Kids, the kid-focused, stripped-down version of YouTube, adding several new features designed to reflect the app's aging user base, recent reports don't sound good for the media streaming platform.

The controversy surrounds how the tech giant's algorithms and AI features could allow troubling videos to slip past its filters on an app designed primarily for youngsters.

The report, credited to The New York Times, pinpointed a dark side to YouTube Kids, with one video showing Mickey Mouse in a pool of blood while Minnie looks on in horror, and another showing a claymation version of Spider-Man urinating on Elsa, the princess from "Frozen."

As if those weren't worrying enough, then comes a video of characters from “PAW Patrol,” a Nickelodeon show popular among preschoolers, screaming in a car as the vehicle hurtles into a light pole and bursts into flames.

The 10-minute clip, “PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized,” was a nightmarish imitation of an animated series in which a boy and a pack of rescue dogs protect their community from troubles like runaway kittens and rock slides. In the video watched by Isaac, a child cited in the report, some characters died and one walked off a roof after being hypnotized by a likeness of a doll possessed by a demon.

Though the offending videos are just a tiny fraction of YouTube Kids' universe, they are a typical instance of abuse on digital media platforms that rely on computer algorithms to filter the content that appears in front of users, especially young people.

YouTube has labeled the content "unacceptable," but said its distribution isn't rampant.

Meanwhile, parents concerned about what their kids are watching on YouTube Kids have additional parental controls to limit what the kids view, allowing them to block specific videos or channels and to turn off search.

Google under Scrutiny as Horror Videos appear on YouTube Kids

Microsoft's open source code editor, Visual Studio Code, is designed as a streamlined editor for code editing, debugging, running tasks, and version control, leaving heavier workflows to a full-featured integrated development environment (IDE).

The roadmap for Visual Studio Code includes better performance, reduced memory consumption, and improved support for JavaScript and TypeScript.

For the multilanguage code editor, the performance-oriented plans also include support for language packs with community-contributed translations, and more diagnostics capabilities.

Microsoft aims to improve the discoverability of using TypeScript to type-check JavaScript code, with improvements to source maps so they are more precise and provide variable mappings. The editor would also gain the ability to organize and remove unused imports for the two languages.

It also wants to enhance support for splitting and viewing multiple terminals, along with source-control integration improvements that include the ability to view changes inside the editor using a peek/inline experience.

Microsoft also intends to improve both the extension recommendation system and searching, so that tracking issues caused by extensions would be simplified.

The initial 1.0 version debuted in April 2016 with support for Node.js, JavaScript, and TypeScript. You can download the latest version of Visual Studio Code from the project website.

Microsoft's roadmap for Visual Studio Code includes JavaScript and TypeScript

Google Search already affords users custom updates on live matches and EPL table standings, among others; now under testing is a new feature that will compare the specifications of smartphones when a user searches for two devices.

The company seems to be testing the comparison feature with some users: searching for two devices with "vs" in the middle (for example, "Pixel 2 vs Pixel 2 XL") brings up a comparison chart.

It doesn't seem to work with three or more devices, though; there's only a mode to highlight the differences between two.

As spotted by AndroidPolice, the device comparison feature is visible on the main results page, and tapping the blue button expands it to show every detail.

The new feature seems to still be rolling out, and when active for everyone, may include comparisons for other popular mobile brands.

Google testing highlight of differences between devices in search

Twitter earlier blocked searches for the word “bisexual” in photos and news, which angered quite a few people, but the company responded that the removal of search results for certain terms was “an error.”

While Twitter claimed it was due to an error, some users aren't quite convinced by the rationale behind the incident.

The removal could be seen as a form of discrimination, purposely preventing people from searching for items such as images and news articles about a term labeling one's sexuality.

According to Twitter, its teams are working quickly to resolve the issue.

Twitter maintains that it doesn't know how this happened, even as the removal only seems to affect the label of sexuality.

Meanwhile, seriously derogatory terms like “Hitler” or “Nazi” were not removed, nor has the Russian bot problem been addressed, seemingly taking ages to fix.

Twitter's removal of “bisexual” in search for photos and news

Apple's revolutionary new iPhone X feature, Face ID, lets you unlock your phone by looking at it, but just how much a facial biometric can reveal about its bearer is quite significant.

The new front-facing sensor module within the ‘notch’ enables the smartphone to sense and map in-depth facial features, which inevitably, communicates a lot about its owner without them necessarily realizing it.

This dexterous piece of technology, the TrueDepth camera system, powers a new authentication mechanism based on a facial biometric, Face ID; iPhone X owners are required to register their facial biometric by tilting their face in front of the camera.

Face ID also replaces Touch ID for Apple Pay and other apps that use it to authenticate users, like banking apps.

Apple claims it does not have access to the depth-mapped facial blueprints that users enroll when they register for Face ID; the mathematical model of the iPhone X user's face is encrypted and stored locally on the device in the Secure Enclave.

But as Face ID learns over time, additional mathematical representations of the user's face may also be created and stored in the Secure Enclave after successful unlocks, if the system deems them useful to “augment future matching”.

Developers incorporating Face ID authentication into their iOS apps do not have access to the Secure Enclave; authentication happens via a dedicated API, which returns only a positive or negative response after comparing the input signal with the enrolled Face ID data.

Apple's engineering and the security systems behind Face ID's architecture should give you confidence that the core encrypted facial blueprint used to unlock your device and authenticate your identity with all sorts of apps is never shared with anyone.

Does Apple's Face ID facial biometric raise privacy concerns?

Facebook has been an unconventional breaking-news source, and the leading social platform may be gearing up for a new “breaking news” tag feature that will enable publishers to bring their news posts to the right audience.

The company, which is constantly testing new products, is working on a "breaking news" feature, though no official statement has been made about it.

First spotted by Matt Navarra, director of social media at The Next Web, the “breaking news” tag for publishers, could serve to push relevant news stories higher into users’ feeds, or at least make them more visually appealing.

Matt, who has become a notorious leaker of unreleased Facebook products and features, which he usually finds in Facebook's code or through other means, also shared screenshots on Twitter of a purported payment system called "red envelope".

The red envelope feature is especially interesting: red envelopes are typically used for gifting during the Chinese New Year holidays, and the ability to send money on Facebook would imply an actual peer-to-peer payments system.

The company already has a payments system in place in Messenger, but it's not supported in Facebook's main app or on the web.

Facebook has been wooing sellers on its Marketplace, where advertisers are not allowed to buy ads specifically for the Marketplace section; getting people to actually pay for purchases on Facebook may therefore be the monetization scheme.

Facebook “Red envelope” and “Breaking news” features coming soon

Kotlin, the newly endorsed language for Android app development, has hit version 1.2 and offers an experimental feature enabling reuse of code across platforms, as well as compatibility with the Java 9 module system.

The experimental multiplatform projects capability allows developers to reuse code between target platforms, starting with the JVM and JavaScript, with native support coming later.

Code is compiled for both the common and platform-specific parts; shared code is placed in a common module, while platform-dependent parts are placed in platform-specific modules.

Developers can express dependencies of common code on platform-specific parts via expected and actual declarations.

An expected declaration specifies an API, while an actual declaration is either a platform-specific implementation of that API or a type alias referring to an existing implementation of the API in an external library. The standard library also gains the kotlin.math package for performing mathematical operations in cross-platform code.
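As a rough sketch of how such a pair of declarations lines up (hypothetical names; the two halves live in different source sets of a multiplatform project, so this won't compile as a single file):

```kotlin
// common module: the expected declaration specifies the API
expect fun platformName(): String

// common code can call it without knowing the platform
fun greeting(): String = "Hello from ${platformName()}"

// JVM-specific module: the actual declaration supplies the implementation
actual fun platformName(): String = "JVM"
```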

The kotlin.math package also offers better precision for math polyfills on JavaScript, and the standard library is now compatible with the newly introduced Java 9 module system, which forbids split packages (multiple .jar files declaring classes in the same package).
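A minimal example of the cross-platform kotlin.math package in action (the helper function below is illustrative):

```kotlin
import kotlin.math.PI
import kotlin.math.hypot
import kotlin.math.roundToInt

// hypot() and PI come from kotlin.math, so the same source
// compiles unchanged for the JVM and JavaScript targets.
fun hypotenuse(a: Double, b: Double): Double = hypot(a, b)

fun main() {
    println(hypotenuse(3.0, 4.0))    // prints 5.0
    println((100 * PI).roundToInt()) // prints 314
}
```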

In version 1.2, the kotlin-stdlib-jdk7 and kotlin-stdlib-jdk8 artifacts replace the old kotlin-stdlib-jre7 and kotlin-stdlib-jre8.

Deprecated declarations in the kotlin.reflect package have been removed from the kotlin-reflect library to make Java 9 support possible.

Other improvements in Kotlin 1.2 include the option to treat all warnings as errors, and the compiler can now use information from type casts in type inference. This is especially relevant for Android app development, enabling the compiler to correctly analyze findViewById calls with Android API level 26.
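A minimal Gradle (Kotlin DSL) sketch of two of the changes above, the renamed standard-library artifact and the warnings-as-errors option; coordinates and option names are assumed from the Kotlin Gradle plugin, so treat this as illustrative rather than a drop-in build file:

```kotlin
// build.gradle.kts (fragment) — illustrative only
dependencies {
    // kotlin-stdlib-jdk8 replaces the old kotlin-stdlib-jre8 artifact in 1.2
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
}

tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
    kotlinOptions {
        allWarningsAsErrors = true // new option: treat all warnings as errors
    }
}
```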

The Kotlin version 1.2 release candidate is now available for download on GitHub.

Kotlin 1.2 Release Candidate now out, with support for Java 9

Following Thursday's saga, in which the U.S. president's personal Twitter account was deactivated, Twitter said it has “implemented safeguards” to prevent this sort of mishap from happening again.

Trump's Twitter handle has been a subject of mixed reactions, not only because the U.S. president makes some of the most controversial tweets, but because he has tweeted more than 36,000 times.

As such, the halt to the controversial tweets was cheered by many, but others were worried about what it revealed about employee controls at Twitter.

The company confirmed the new safeguards in a tweet on Friday.

It's understandable that Twitter gives some employees the right to suspend the accounts of bots and of those who violate its rules; but wouldn't it at least require an extra check before deactivating the account of such a public figure?

Twitter has been grappling with increasing trolling and abuse, and the social media company has had a most difficult time tweaking its rules to protect the experience and safety of its users.

The company, for the first time, has made public how they communicate with people who violate their rules, and how the enforcement processes work.

Twitter implements "safeguards" on Trump's account to prevent deactivation

YouTube Kids is the kid-friendly version of YouTube, offering youngsters a safer way to browse than giving them full access to the main app. While this more filtered version of YouTube was first introduced in 2015, it has been criticized in the past for not fully locking down the YouTube experience.

Now, the updated app has added several new features designed to reflect the app’s aging user base, with customizable profiles based on the kid’s date of birth, as well as more controls for parents and kids as regards privacy and security.

It sports a new streamlined design and curated selections of kid-appropriate content from publishers like DreamWorks TV, National Geographic Kids, Reading Rainbow, and Thomas the Tank Engine, among others.

The new parental control feature eases access for parents: instead of toggling off the app's search, setting a private passcode, or using the default setting, which spells out numbers as words for parents to enter, they can now sign in with their Google account to create customizable profiles for their kids.

The YouTube Kids app changing its look based on the kid's age is especially useful for parents with multiple kids.

That matters because YouTube Kids, by default, looked like an app designed more for preschoolers than for the school-age crowd. Overall, it's safer for kids to browse YouTube Kids than to have access to the main app.

YouTube Kids now gives youngsters their own profiles, and more controls

The Adobe Digital Insights (ADI) team, as part of its annual “Holiday Predictions” report, analyzed 1 trillion visits to over 4,500 retail websites, 55 million SKUs, and 12 million social mentions. While the results aren't quite surprising, ADI predicts online holiday sales will reach $107.4 billion, a 13.8% increase over the same period last year.

According to ADI, online growth will continue to outpace overall retail growth during the holidays — 13.8% online vs. 3.8% overall.

ADI also conducted a companion survey of 1,100 U.S. consumers to produce the following predictions about consumer online purchase habits this holiday season. From this aggregated and anonymized data, 31% of shoppers reported that they plan to spend more online this year than last year.

More than half of visits to shopping sites will come from smartphones and tablets, which will surpass desktop computers for the first time.

That means if you visit a U.S. retailer's website this holiday season, you're most likely to do so on a mobile device, with mobile taking a bigger share of the predicted $107.4 billion in online holiday spending this year, up from $94.4 billion last year.
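As a back-of-the-envelope check on those figures (a sketch, not ADI's methodology): growing last year's roughly $94.4 billion by the predicted 13.8% lands on this year's $107.4 billion forecast.

```kotlin
// Project this year's online holiday sales from last year's total
// and ADI's predicted growth rate (figures in billions of USD).
fun projectedSales(lastYearBillions: Double, growthRate: Double): Double =
    lastYearBillions * (1 + growthRate)

fun main() {
    val forecast = projectedSales(94.4, 0.138)
    println("%.1f billion USD".format(forecast)) // ~107.4
}
```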

There's still a disparity between mobile retail website visits and revenue, but the move toward mobile shopping is inevitable.

While desktop purchases still account for two-thirds of revenue year-round, mobile is often the starting point for many shoppers.

The key is to make the mobile experience helpful to the customer, wherever they are in their shopping journey. And the retailers that deliver the best experience are the ones that will close the deal.

Adobe's Predictions for the Holiday Shopping pitches mobile over the desktop

The U.S. president's personal Twitter account, @realDonaldTrump, disappeared from the social network Thursday afternoon; but rather than any rule violation, Twitter attributed the deactivation to human error by an employee.

Visitors to Trump's page were greeted with a rather unexpected message that the page didn't exist. The deactivation didn't last long, though, as the page was promptly restored to its usual appearance.

Twitter, however responded via its @TwitterGov handle, with the message that Trump's account "was inadvertently deactivated due to human error" before later placing the blame on an employee spending their last day with the company.

Twitter had earlier acknowledged that Trump's tweets had caused an uproar, but maintained they were allowed to stay because of their "newsworthiness."

The company's rules forbid using the service to make violent threats, either directly or indirectly; any account violating that rule may be subject to a temporary or permanent suspension, Twitter warns.

But many observers have wondered why some of Trump's tweets, or the account itself, weren't removed by the social media platform, despite their apparent violation of its rules.

Twitter attributes deactivation of Trump's account to human error

Snap has unveiled a dancing burger AR filter, following last month's rollout of its new Lenses product, as a troll on Google and Apple, whose row over the order in which the ingredients are stacked in their burger emoji erupted over the weekend.

Snapchat's burger is actually superior to Google's and Apple's, as most burger-munching peeps prefer their burgers stacked from top to bottom in this order: lettuce, cheese, and then the meat.

Apple's emoji has the cheese firmly planted on top, with the lettuce underneath the burger. Google's burger emoji has the cheese placed underneath the burger patty, with the lettuce leaf atop all other ingredients.

Snap's dancing burger lens has one obvious edge over that of Apple and Google: the two burger patties.

Snapchat opted for this order: lettuce, cheese, and burger, which might be completely wrong too. Twitter users battled it out over the weekend and into early this week, with one user reaching a verdict of their own.

Indeed, there's something about Snapchat's burger dance moves that screams "trolling"!

Most obviously, Snapchat is taunting Google and Apple. It doesn't have its own emoji, but it does have AR lenses. And a good sense of humour, too.

What's not so crunchy about Snap's dancing AR burger

Apple just pulled a fast one with its recently launched digital wallet and payments service, Apple Pay, which now nabs 90 percent of all mobile contactless transactions everywhere the service is active.

Apple Pay is now in 20 markets worldwide, representing a whole 70 percent of the world's card transaction volume, which underscores Apple's approach of moving first into markets where mobile money has already kicked off.

Apple Pay allows instant payment by just tapping the terminal with your mobile device, instead of swiping a credit card and waiting for the payment to clear.

Apple Pay has admittedly had its fair share of bumps, and is bound to encounter more as it continues rolling out around the world, but mobile contactless payment has become a favored payment method by a wide margin.

The Apple advantage points to using phones as a proxy for a card or cash, and there is some anecdotal evidence that it’s working.

Indeed, merchants and others who have partnered with Apple say that Apple Pay is accounting for 90 percent of all mobile contactless transactions globally in markets where it's available.

Apple also announced plans to launch the service in Denmark, Finland, Sweden, and the UAE in the next few days, bringing the total number of countries where it operates to 20. There are now 4,000 credit and debit card issuers whose cards can be added and used with Apple Pay.

Apple Pay hits 90 percent of Global Mobile contactless transactions