An offshoot of a Mozilla research project, WebAssembly was first announced in mid-2015 as a low-level language for in-browser client-side code, designed to be faster to parse and execute than JavaScript.

But four years on, most JavaScript developers have yet to get their feet wet with WebAssembly, according to the State of JavaScript 2019 report; the technology has drawn plenty of attention, but few developers have actually used it.

The report noted that only 8.6 percent (1,444) of the 16,694 JavaScript developers who responded on the topic said they had used WebAssembly, even though it has been touted as a mechanism for speeding up web applications, with support for languages like C, C++, and Rust for client-side and server-side web development.

WebAssembly provides a binary code format that is smaller over the wire, loads faster, and outperforms JavaScript at computationally intensive operations in the browser.
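To make that comparison concrete, here is a minimal TypeScript sketch of how a page might load and call into a WebAssembly module from the browser; the module URL ("/math.wasm") and its exported "add" function are hypothetical placeholders rather than anything from the report.

```typescript
// Minimal sketch: fetching, compiling, and calling a WebAssembly module.
// "/math.wasm" and its exported "add" function are hypothetical examples.
async function loadWasm(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(fetch("/math.wasm"));
  // Exported WebAssembly functions are callable like ordinary JavaScript functions.
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // -> 5
}

loadWasm().catch(console.error);
```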

The State of JavaScript 2019 report is based on a survey of 27,717 JavaScript developers from around the world, covering everything from JavaScript features to frameworks, utilities, and testing tools. Its major highlights are as follows: service workers, which act as proxy servers between web applications, the browser, and the network, were not used by 54 percent of respondents, even though they were aware of them.
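For readers unfamiliar with the feature, registering a service worker from a page script takes only a few lines; a minimal sketch (the "/sw.js" script path is a hypothetical example):

```typescript
// Minimal sketch: registering a service worker so it can act as a proxy
// between the page, the browser, and the network. "/sw.js" is hypothetical.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then((registration) => console.log("Registered with scope:", registration.scope))
    .catch((err) => console.error("Service worker registration failed:", err));
}
```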

Some 58.5 percent of respondents said they had used TypeScript and would use it again, while 35.8 percent had used the Angular framework but said they would not use it going forward; only 21.9 percent said they would continue using Angular.

Notably, dissatisfaction with the Angular framework has risen among developers since the 2017 report.

Another interesting finding concerns Electron, a tool for building cross-platform desktop apps with JavaScript, HTML, and CSS, which had been used by 25.6 percent of respondents who said they would use it again, trailing the React Native framework at 27.2 percent.

The React JavaScript UI library fared best, endorsed by 71.7 percent of respondents, who said they would use it again.

Additionally, JavaScript promises were used by nearly 95 percent of the 20,543 developers who were aware of them, while the Local Storage browser API was used by 89.5 percent of the 20,021 developers aware of it. The full report was produced by Sacha Greif and Raphael Benitte, and can be found here.
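As a quick illustration of those two features, here is a hedged TypeScript sketch that fetches a value with the Promise-based fetch API and caches it with Local Storage; the URL and storage key are invented for the example.

```typescript
// Minimal sketch combining Promises (via async/await) and Local Storage.
// The endpoint URL and the "greeting" key are invented for illustration.
async function cachedGreeting(): Promise<string> {
  const cached = localStorage.getItem("greeting");
  if (cached !== null) return cached;

  const response = await fetch("https://example.com/api/greeting");
  const greeting = await response.text();
  localStorage.setItem("greeting", greeting); // persists across page loads
  return greeting;
}

cachedGreeting().then((greeting) => console.log(greeting));
```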

State of Code: JavaScript Developers still to Get their Feet Wet on WebAssembly



The world's first pidgin-to-English machine translation model has been developed by two Nigerian AI engineers, Kelechi Ogueji and Orevaoghene Ahia, colleagues at InstaDeep, a Lagos, Nigeria-based AI research firm.

Perhaps the only successful model for translating pidgin to English, the project was accepted and published at NeurIPS 2019, the world's largest gathering of artificial intelligence researchers and enthusiasts. Machine translation (MT), which is quite different from machine-aided human translation (MAHT), is the branch of computational linguistics concerned with translating text or speech from one language to another.

At its most basic level, MT simply substitutes words in one language for words in another, which usually cannot produce a good translation in context because it misses phrases and their closest counterparts in the target language. The solution lies in neural techniques, a rapidly growing field that has produced better translations, capable of handling differences in linguistic typology, idioms, and the isolation of anomalies.
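As a toy illustration of that basic word-for-word approach, and of why it falls short, consider the following TypeScript sketch; the tiny word list is invented for illustration and is not the researchers' data.

```typescript
// Toy word-for-word substitution, the "basic level" of MT described above.
// The dictionary entries are invented for illustration only.
const dictionary = new Map<string, string>([
  ["abeg", "please"],
  ["wahala", "trouble"],
  ["dey", "is"],
]);

function naiveTranslate(sentence: string): string {
  return sentence
    .split(" ")
    .map((word) => dictionary.get(word.toLowerCase()) ?? word)
    .join(" ");
}

// Idiom and context are lost: "no wahala" ought to read "no problem",
// but pure substitution can only swap individual words.
console.log(naiveTranslate("abeg no wahala")); // -> "please no trouble"
```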



For instance, to build an English-to-French translator, you need tons of sentence pairs in both languages. But with little pidgin English written online, finding decent translation data for the project proved to be a major hindrance.

How did the Duo Brave the odds?



The difficulty of finding pidgin data led them to train an Unsupervised Neural Machine Translation (UNMT) model, which essentially meant creating a pidgin-English catalog of word pairs from scratch, scraping about 56,695 pidgin sentences and 32,925 unique words from different websites.

The project was spurred by Ogueji's interest in language translation, which dates back to his first encounter with Google Translate in 2014, and it is the first Natural Language Processing (NLP) project in the world to tackle pidgin English by any developer or group. It was accomplished despite the fact that AI development is still at an embryonic stage in Nigeria, with research data scarce and, where available, often too raw to be useful.

Beyond these seemingly insurmountable hindrances, they found support in tools like Word2vec and Google's Transformer architecture, which gave their ambition a solid foundation.

What's the Application of a Pidgin-to-English Translation Model?



Natural language processing (NLP) is becoming more practical thanks to modern technologies such as machine learning, deployed in voice-to-text platforms like Google Assistant, Siri, and Alexa. Other fields that depend on communication can also be improved with NLP; branding agencies, for instance, can apply it to find out exactly how people feel about a campaign.

The best digital marketing tools already combine advancing NLP techniques with age-old linguistic concepts. But because most analysis of digital audiences is based on high-resource language models, nuance can be lost when analyzing online chatter: Nigerian Twitter, for example, contains a lot of Nigerian Pidgin English, with its very distinct lexicon, so the potential application in digital marketing is enormous.

Owing to visa delays, however, Ogueji and Ahia were unable to present their work at the NeurIPS 2019 workshops, which would have given them an opportunity to network with fellow researchers from the world's top universities and firms.

The disappointment did not kill their drive, however: they have already made the code available on GitHub for anyone to contribute to building on the pidgin-English NLP work.

Pidgin-to-English Translation Model developed by two Nigerian AI Engineers

Following the discovery of a hack that allowed users to tweet multiple GIFs by converting them into animated PNGs (APNGs), Twitter has put a stop to the use of APNGs, citing performance issues.

The APNG file format is an extension to the Portable Network Graphics (PNG) specification that allows for animated files working much like GIFs, but with support for 24-bit images and 8-bit transparency, neither of which GIF offers.

APNG competes with Multiple-image Network Graphics (MNG), a more comprehensive bitmapped-animation format built by the same team as PNG, but APNG's advantages are its smaller library size and its compatibility with older PNG implementations.

It also retains backward compatibility with non-animated PNG files. Twitter, however, cited the potential to cause seizures in motion-sensitive people as a further reason for suspending animated PNG image files (APNGs) on its platform.



According to Twitter, a bug allowed users to add multiple animated images to a tweet using animated PNG files; APNGs ignored the safeguards put in place and could cause performance issues for the Twitter app and the user's device. The fix that has now been pushed out means APNGs can no longer be used in tweets.

The company added that existing APNG files uploaded to the platform will not be removed, and that the team will look into building a similar feature that is better for users and their Twitter experience.

Twitter Suspends the use of Animated PNG Image files (APNGs) on its platform



An increasingly technology-focused world has made remote work very much the norm in advanced economies, with programming and web development among the favorite remote jobs; but that is not the case in Nigeria.

Devcenter's 'State of Code Jobs' research reports that of over 3,000 jobs posted on Gigson, Devcenter's jobs platform, about a third (31%) were for back-end and server-side developers, covering work on the scripting, databases, and frameworks behind a site's functionality.

Front-end developer roles accounted for 24%, full-stack developers 20%, and mobile developers about 22%; internships and roles for employers seeking developers to multitask together stood at 3% of all jobs. The programming languages most commonly required were Python, Java, and Ruby.

Why the Preference for Developers' Physical Presence



Analyzing the year-on-year rise in remote job postings, the report found that fully remote roles made up 9% of all postings, while 14% were remote-friendly, 56% were full-time on-site jobs, and 2% were part-time.

The reasons employers gave for the unavailability of remote jobs were concerns over stable power and internet connectivity; they insist that software development staff must be on-premises to avoid excuses for late delivery or failed tasks.

Increase in Co-working Spaces



Co-working spaces have risen to fill the gap created by the unavailability of remote jobs, as developers clamor for a bit more freedom. There are about 117 such spaces in Nigeria at present, up from 5 about a decade ago.

These spaces have become a haven for freelancers and remote workers who buy co-working subscriptions, while some employers provide them as a perk.

Notably, 19% of all the jobs were contract roles, which typically involve work for a short period after which the contractor is let go, since many companies work on a project basis and only need the talent for a while. And Lagos is the place to be if you're aiming to work as a developer in Nigeria.

Nigerian Developers' Take-home Pay



Developer remuneration is based on experience level, and employers are somewhat reluctant to disclose exactly what they will pay until they've spoken directly with the prospective developer.

Among employers willing to reveal how much they pay before contact is made, the average offer for junior developers is usually in the range of $200 (about N80,000 in the local currency) to $400 (N150,000) per month.

Salary offers above N300,000 ($827) are mostly reserved for advanced, highly experienced developers. The Gigson report is meant to help developers understand exactly what employers are looking for from them.

State of Code: Why Remote Jobs are still Few and Far Between for Nigerian Developers



The examination period can be stressful for lots of students all over the world, irrespective of their grades, and certainly no one is 100 percent guaranteed a good score. It is better to be well prepared, which is why 123dissertations.com provides professional help on assignments so that you can focus your revision and boost your academic performance.

Any student who is adequately prepared for an exam will have an easier time when the question paper lands on the desk. Every student aims for good performance to enhance their chances of a lucrative career, and good performance comes from adequate preparation before the examinations.

Here are 5 tips that will enable you to prepare adequately for your College Exams.

5 Study Tips for College Exams


1. Start Early Revision


Don't wait until the last minute to begin revising for your examination. Time is a big asset, especially when you're about to face an exam: it allows you to read more books, use alternative reference materials, and also relax. So set aside ample time, because rushing at the last minute will only cause panic or anxiety, which is perhaps the worst state in which to prepare for an examination.

Early revision lets you identify the topics that are likely to appear and spot those you have not yet covered. You'll also have enough time to revise all the topics, consult your teacher, or take additional measures to ensure that you master them.

But if you're short of time, even the areas you have mastered are more likely to slip away, making preparation for your exams that much more difficult.

2. Set Realistic Study Goals


Always prepare for exams with a particular target in mind. This calls for a timetable you can use to set goals and manage your time. Whether or not you achieve those goals gives you a clear idea of your preparedness, and helps you identify when to increase your study hours to cover more topics.

The goals should also account for the amount of work to be done: some topics are more challenging and require more study hours, while others are easy and can be completed in a flash. Either way, you must endeavor to be ready for the exam when the date arrives.

3. Remove All Distractions


The best environment in which to prepare for an exam is a desk and room free of distractions. Music, television, the internet (unless it aids your study), and other distractions will slow you down; they may cause you to miss your targets and ultimately be unprepared when the exam comes.

Prepare a desk that is well lit and comfortable, since you are going to study for a prolonged period and need your mind ready to absorb more content. The mind more readily memorizes content studied in a quiet environment, but it takes some discipline to keep distractions at bay, such as switching off your phone, internet, music, and television. Followed faithfully, this tip will give you a peaceful environment in which to study.

4. Learn To Relax


Preparing for exams comes with immense pressure. You have to sit for long hours reading books and other reference materials. The mind and body will be fatigued from these tough exercises.

Allocate time such that you can watch a movie, engage a friend in a chat, visit a relaxing environment, or even sleep. Relaxing enables the mind to absorb materials and commit them to memory.

5. Look Forward to The Exam


Your preparation must take the exam timetable into consideration. This allows you to plan your revision so that you cover all subjects comprehensively. Always factor in the time between exams, because it is useful for effective revision.

In conclusion, any student who prepares well for an exam will have an easier time during the hours the test lasts. The mind must be prepared, and you must find the best environment in which to study so that the topics are ingrained in memory and available when the tests arrive.

How to Prepare for your College Exams (Best 5 Study Tips)



Google has announced Project Connected Home over IP in partnership with Apple, Amazon, and the Zigbee Alliance, with the aim of establishing a new standard for IP-based, cross-platform communication among smart home devices.

Among other things, the open-source project is meant to guarantee that smart home devices work irrespective of which voice assistant is used, making it possible to choose any of the popular digital assistants, including Amazon Alexa, Google Assistant, and Apple Siri, for your smart home device.

Google, Amazon, Apple, and the Zigbee Alliance form the Working Group, and Zigbee Alliance board members IKEA, Legrand, Resideo, NXP Semiconductors, Samsung SmartThings, Signify (formerly Philips Lighting), Schneider Electric, Silicon Labs, Wulian, and Somfy are also part of the Working Group contributing to the project.

How will Connected Home over IP Work?



The Connected Home over IP Project aims to increase compatibility by simplifying development for manufacturers, built around the shared belief that smart home devices must be seamless, secure, and reliable. By building on the Internet Protocol (IP), the project will enable communication across smart home devices, cloud services, and mobile apps, while defining a specific set of IP-based networking technologies for device certification.



An open-source approach will be taken to developing and implementing the unified connectivity protocol, with contributions from already market-leading smart home technologies from Google, Amazon, Apple, the Zigbee Alliance, and others. These technologies were chosen because they will help accelerate development and deliver benefits to manufacturers and consumers faster.

The protocol will complement existing technologies, and the Working Group will encourage device manufacturers to continue innovating with the technologies available today.

Why Internet Protocol (IP)?



There is currently no widely adopted open smart home standard built on the Internet Protocol (IP), even though IP is the protocol of the internet and the most common network layer used in homes and offices. IP messages are easily routed across networks independently of the physical and link layers, and there are ample tested algorithms and infrastructure for routing, switching, and robust, resilient firewalling.

IP is a well-understood protocol, as are TCP and UDP that run on top of it, which makes it well suited to delivering end-to-end security and privacy in communication between devices, apps, and services.

There are a number of IP-bearing networks available, designed for different use cases, and because the protocol is built on IP, message traffic can flow seamlessly across the different types of networks. Today, smart home devices that use proprietary protocols have to be tethered to a home network through dedicated proxies and translators.

The Connected Home over IP Working Group will likely embrace other IP-bearing technologies such as cellular, Ethernet, and broadband; the initial draft specification is expected to arrive in 2020.

Smart Home set to get Open Standard with Connected Home over IP (CHIP)



Hidden Cobra, also known as Lazarus Group, is a North Korea-linked cyber-espionage group notorious for the WannaCry ransomware, which could spread automatically across large networks by exploiting a known bug in Microsoft's Windows operating system.

The group had until now specialized in malware targeting Windows and macOS systems, but a new Remote Access Trojan (RAT) dubbed Dacls, which affects both Windows and Linux systems, has been traced to it. According to researchers at Qihoo 360, Dacls was attributed to Lazarus because the thevagabondsatchel.com download server had been used by the APT (Advanced Persistent Threat) group in several past attacks.

Dacls is particularly notable because it is the first Linux malware attributed to the group; no security researcher had previously disclosed any Lazarus Group attack on Linux systems.



The Linux variant contains all the plug-ins needed to carry out an attack within the bot component, whereas on Windows the plug-ins are loaded remotely onto the affected system; in both cases its command-and-control channels are secured with TLS and RC4 double-layer encryption.

Dacls also uses AES encryption for its configuration files, and through its plug-ins it can receive and execute C2 commands, download additional data from the C2 server, perform network connectivity tests, and scan networks on port 8291, among other things.

The malware was named Dacls after its hard-coded strings and file names (Win32.Dacls and Linux.Dacls). The same download server also hosted the open-source program Socat and a working payload for Confluence CVE-2019-3396, leading researchers to speculate that the Lazarus Group used the CVE-2019-3396 N-day vulnerability to spread the Dacls bot program.

Once the Linux.Dacls bot is activated, it runs in daemon mode in the background, using the startup parameter /pro, the bot PID file /var/run/init.pid, and the process name from /proc//cmdline to distinguish between different operating environments.

The major functions of the Linux.Dacls bot include file management, command execution, process management, acting as a C2 connection agent, testing network access, and a network scanning module.

Users are advised to patch their systems in a timely manner and to check whether they have been infected, based on the Win32.Dacls and Linux.Dacls indicators used by the Dacls RAT.

Hidden Cobra, the North Korea-linked hacker group unleashes Linux Malware



Microsoft's adoption of the Blink rendering engine, forked from WebKit and used by Google Chrome and Opera, is perhaps the company's biggest move yet to improve Edge's market share and position it to deliver better performance.

While the new Edge browser has the same Chromium base as Chrome and Opera, it offers access to Microsoft accounts and features like the Bing search engine by default; it also supports browser add-ons and extensions, which developers are expected to submit to the Microsoft Edge (Chromium) Addons Store.

The company has for the first time made a strong case for enterprise users to switch to the new browser, outlining what will be implemented as group policies for Windows devices and publishing a document describing the policies for both Windows and Mac computers; the group policies, incomplete as they currently are, are tailored to convince enterprises to adopt the new browser.

Release date & Update for Chromium Edge browser



Microsoft has announced a January 15, 2020 release date for the Chromium-based Edge browser, and the stable version will arrive as part of a Windows 10 update. With Windows 10 updates functionally mandatory, except of course for users on a managed enterprise network, Windows PCs will automatically download the update and then start pestering you to complete the installation by restarting your PC.

The change won't be a problem for most people, and if you never used the old Edge and don't want to use the new one, Microsoft will respect your choice, though it will still bundle the program onto your PC. The company does, however, offer a tool to stop Edge from installing on your computer with the next Windows 10 update.

It requires a bit of legwork, though the process is pretty straightforward, and it lets you choose when (or whether) to install the new Edge browser.

How to Prevent Microsoft Edge from Automatically Installing on Your PC



Microsoft has made the Blocker Toolkit available to help users retain the original Edge browser (bundled with Windows 10 since its debut), at least for a while. Like similar kits before it, it ships as an executable that can easily be run locally, and it includes a template that IT admins can use to block the new Edge through Group Policy settings.

If an organization's IT admin sets the 'Allow Microsoft Edge Side-by-Side browser experience' policy to Enabled, both Edge versions (new and old) can run side by side, and users can create a dual-browser setup by running the Blocker Toolkit's executable (which retains the original Edge) and then downloading and installing Chromium Edge manually.

There are several caveats about the Blocker Toolkit, including that it only prevents PCs running Windows 10 1803 or later from pulling in Chromium Edge via Automatic Updates.

Additionally, users can still manually download and install Chromium Edge after the Blocker Toolkit has been deployed, and organizations whose machines run an update/patch manager such as WSUS (Windows Server Update Services) or SCCM (System Center Configuration Manager) won't need the Blocker Toolkit at all, since IT admins can use those tools to deploy or block Chromium Edge.

Microsoft's Timeline for Release & Update to Chromium Edge browser



Google was forced to halt the roll-out of Chrome 79, the latest version of its hugely popular web browser, due to a bug affecting Android WebView that is wiping apps' data on devices.

Android WebView renders web pages and content within apps, and the bug causes it to use a new location for storing local data without successfully migrating the existing data. Whenever a user opens an app that relies on locally stored data, their previous information fails to show up, making it appear as if their data has been lost.

The stored data is not actually lost; rather, because Chrome 79 points apps to look for local data in the new location, apps that rely on Android WebView cannot access the old data.

Google acknowledged the fault in its bug report on Friday, adding that the issue relates to file migration from Chrome 78 to the latest version, Chrome 79. And given that Chrome updates happen seamlessly in the background, the latest version may already have been automatically installed on millions of Android devices.

According to Google, an estimated 50 percent of devices are already running the latest Chrome release. Google is aware of the bug and has marked it P0 (the highest priority level), but no fix is yet in sight.

What can be done about the Chrome 79 bug?



Since the update doesn't actually delete the previously stored data, only misplaces it, the data can be recovered with debug tools, though this is not easy for ordinary users and is better left to a developer.

The two options described by the Chromium team are:

1. Continue the migration, moving the missed files into their new locations.
2. Revert the change by moving migrated files to their old locations.

In the meantime, rolling back to the previous version, Chrome 78, is perhaps the less stressful option, as it preserves data saved before the update, at the expense of losing any new local data saved since Chrome 79 was automatically installed on your device.

For those who haven't yet updated to Chrome 79, the recommendation is to wait until Google issues a permanent fix, and to turn off automatic updates on your Android phone to prevent the device from auto-updating to the latest version.

Update: Google has fixed the Chrome 79 bug with the release of Chrome 79.0.3945.93, which is now rolling out to Android devices.

Google halts the roll-out of Chrome 79 as bug affects Android WebView (Updated)



Mozilla's recently released Firefox 71 comes with native MP3 decoding for Windows, macOS, and Linux, following the expiry of the patents on the aging lossy-compression MP3 audio file format.

Native MP3 decoding is a big deal because Firefox no longer has to rely on third-party software to play MP3 content, such as podcasts, right in the browser. Free software projects based in countries where software patents are an issue can now also include the codec, following Firefox's example.

There is also a redesigned about:config page, where the old configuration options that helped make Firefox such a great browser can still be found; most users never bothered to look at them, but Mozilla has revamped the page's layout and recoded it in HTML5.



The latest Firefox release also ships with many security fixes, including memory-related ones addressing bugs that showed evidence of memory corruption and could have been exploited to run arbitrary code.

Additionally, Firefox 71 offers a new 'kiosk mode' for Windows, useful for running Firefox pointed at a captive intranet web portal on a special-purpose device; it is activated by launching the browser with the -kiosk option. The -kiosk option also works on the GNU/Linux version, but there it merely launches a rather unhelpful full-screen Firefox.

There is also a catch-up feature: Picture-in-Picture (PiP) mode, already available in Google Chrome, which lets video play in a 'pop out' window that floats on top of other windows and can be repositioned on the screen. With this release, Firefox for Windows gains support for Picture-in-Picture.
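For context, Chrome already exposes Picture-in-Picture to page scripts through a standard web API; the following is a minimal sketch of how a page might pop a video out, with the element id being a hypothetical example.

```typescript
// Minimal sketch: requesting Picture-in-Picture for a <video> element.
// The "#player" element id is a hypothetical example.
const video = document.querySelector<HTMLVideoElement>("#player");

async function popOut(): Promise<void> {
  if (video && document.pictureInPictureEnabled) {
    await video.requestPictureInPicture(); // floats the video above other windows
  }
}

popOut().catch(console.error);
```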

You can get the new features on Ubuntu, Zorin OS, Peppermint, Linux Mint, and other distributions simply by upgrading Firefox through the system's update manager, or by downloading Firefox 71 directly from the Mozilla website.

Mozilla brings Native MP3 Decoding with Firefox 71 for Windows, MacOS and Linux



The search giant's AI virtual assistant, Google Assistant, is bundled into Android phones and other Google products like Google Home, making it easy to search by voice and converse with it seamlessly across supported devices.

Over the years Google has been trying to make the assistant better at handling awkward conversations by incorporating the deep neural networks of Google Search and the knowledge base of Google Now, along with the advanced natural language recognition evolving on the Android platform.

Now the company has made it possible to hold voice conversations across different languages without barriers, and without downloading any additional app, as the updated Google Assistant can translate languages in real time.

The new translation capability, called Interpreter Mode, was first demonstrated at CES 2019 but was until now limited to smart speakers and smart displays such as the Google Home and Nest Hub Max.

The recent update finally brings it to smartphones, both Android and iPhone. There is no standalone app to download on Android, as Google Assistant is baked right into the operating system, while iPhone users will have to install the Google Assistant app.

How to use Google Assistant as Interpreter



For Android users, simply say, "Hey Google, be my [insert the language of choice here] translator," then bring your phone close to the person talking, and whatever they say will be translated into the specified language. Translation begins as soon as the person starts talking.



On iPhone, after downloading and installing the Google Assistant app from the App Store, follow the same steps as above to activate Interpreter Mode, place your iPhone near the speaker, and the speech will be translated into your selected language. Interpreter Mode currently supports 44 languages.

Google Assistant speaks its translations out loud, but you can also opt to use the keyboard and show your phone's screen to the other person if you're in a noisy environment.

The system can also choose the translation language automatically based on your location, though you can configure it manually if you prefer.

If you want to translate text on objects, Google Lens, which is baked into Google Assistant on Android phones, supports that in real time: just point your phone's camera at the object to translate the text.

About Data Collection



According to Google, data collection here is no different from how Google Assistant normally collects data: your translations are transmitted to the company over the cloud, though Google claims it does not share your personal data with third parties.

Always bear in mind, though, that Google Assistant could spontaneously start recording snippets of your conversation and could therefore transmit sensitive, identifiable information. If you wish to monitor or delete your Google Assistant data, you can do so from your Google Account activity dashboard.

How to use Google Assistant as your personal Interpreter



The decades-old mobile technology standard USSD remains relevant today, even more so than in years past. Anyone using a mobile phone has dialed something like *123# to check an airtime balance or reach some other service prompt; but nowadays USSD has gone well beyond network providers' native services.

USSD, which stands for Unstructured Supplementary Service Data, has mainstream appeal because it requires neither data nor airtime to access; mobile operators typically use it for native services such as airtime balance checks, top-ups, data services, and promotional offers.

It is perhaps the most basic mobile phone-based service, and it has made big strides in bringing fintech services to unbanked Nigerians, who mostly live in rural areas where mobile data services are abysmal or entirely unavailable.

Why the USSD Service was Introduced



When the mobile industry realized the need for easy machine-to-machine communication to handle basic tasks that require the mobile device to query the network, the USSD protocol was born to supplement the existing GSM standards, which mostly focused on person-to-person communication.

Network operators had used STK (SIM Toolkit) technology to facilitate reselling and subscription service offerings; STK allows operators to code a set of commands into their SIM cards that defines how the SIM card interacts with the device.



However, for STK to work, the SIM card inserted in the device must have the menu burned onto it. The application is normally protected by a PIN, either the SIM card PIN or the phone lock PIN, and if the phone is locked or no SIM is inserted, the service is unavailable.

USSD technology, by contrast, is deployed on the network rather than resident on the mobile device: the service lives in the mobile network, and anyone who switches to the network has access to its USSD menus. Whenever a user requests a menu, invoking the service initiates a real-time session between the USSD application platform and the mobile user.

Mobile Apps vs USSD



Despite predictions by several financial services experts that app-powered mobile banking would replace USSD, it has remained the most widely adopted and successfully integrated mobile technology in emerging markets, including Nigeria.

That said, USSD has not rendered app-based services such as mobile banking redundant, as the two differ in their offerings and serve different customers and market segments.

Some customers prefer one, others the other. Multiple variables determine individual preference, such as data access, the mobile device, educational background and, of course, income. Certainly not every customer has a smartphone, and in rural areas most devices lack data support or Wi-Fi connectivity.

Considering these factors, any financial service provider (FSP) looking to enter the Nigerian market with a mobile banking solution should, as a necessity, incorporate a USSD service if it really wants to succeed, since only USSD works for everyone.

Why Mobile Apps won't be replacing USSD in Nigeria anytime soon



Microsoft Teams is a collaboration platform that integrates with the Office 365 subscription productivity suite, including Office and Skype, and supports extensions for third-party products.

For the first time, Microsoft is bringing an Office 365 application to the open-source operating system, with the launch of a native Linux client for Microsoft Teams in public preview.

A native Linux client has been the most requested Teams feature on Microsoft's forum, and plans to introduce it were announced at the company's Ignite conference.



Until now, Linux users have been stuck with various unofficial, unsupported clients for Skype for Business; the availability of Microsoft Teams for Linux should enable better collaboration experiences for the open-source community in both work and academic environments.

The native desktop app will offer all the core capabilities of the Windows and macOS versions of Microsoft Teams, allowing developers who have built apps for Teams to extend their reach to a new set of users.

Microsoft hopes the Linux desktop client will increase Teams' appeal to developers, and those with a commercial Office 365 subscription can try Microsoft Teams for free, with information on technical requirements available on the official help page.

Finally, Microsoft Teams is now available as native Linux client



Apple has released iOS 13.3 for iPhone, with long-awaited support for USB, NFC, and Lightning FIDO2-compliant security keys in the Safari browser.

FIDO2 comprises the WebAuthn browser API standard together with the FIDO CTAP (Client to Authenticator Protocol), building on the FIDO Alliance's Universal Two Factor (U2F) authentication standard. WebAuthn is the W3C (World Wide Web Consortium)-approved web authentication standard adopted by many tech industry leaders; its API allows strong browser-to-hardware authentication using security keys, NFC, and authenticators such as Touch ID.
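To give a sense of what the WebAuthn API looks like from a page script, here is a minimal, hedged TypeScript sketch of the registration step; the relying-party name, user details, and locally generated challenge are placeholders (in a real deployment the challenge must come from the server).

```typescript
// Minimal sketch of WebAuthn (FIDO2) credential registration in the browser.
// The relying party, user details, and challenge below are placeholders;
// a production challenge must be generated server-side.
async function registerSecurityKey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Service" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
  };
  // The browser prompts for a security key (or a platform authenticator like Touch ID).
  return navigator.credentials.create({ publicKey });
}

registerSecurityKey().catch(console.error);
```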

The ability to use security keys with the Safari browser is perhaps the biggest addition for iOS devices: previously, security keys were not supported in Safari, only in third-party apps like 1Password. The new OS now makes it possible to use security keys such as the YubiKey 5Ci.

Security keys represent the next level of online security, a move away from passwords toward authenticating with two or more factors in a more robust way. They are easy to use and offer strong protection against advanced phishing attacks, in which hackers attempt to break into your account by having you enter your details on a cloned website's log-in page.

The YubiKey 5Ci is perhaps the first iPhone-compatible security key to combine USB-C and Lightning connectors on a single key, making it the best available option at the moment.

iOS 13.3 also brings other new features, such as improvements to the Screen Time parental controls, which let you set limits on how your kids call, text, or FaceTime, and even manage their contacts with time-specific limits.

There are other smaller improvements in iOS 13.3, including the ability to create new videos by trimming clips in Photos, plus fresh Apple News layouts for News+ stories from sources like the Wall Street Journal and other leading newspapers, which let you like or dislike stories easily.

Apple's iOS 13.3 brings support for FIDO2-compliant Security Keys to iPhone



Linux systems have been reported to be susceptible to a flaw that could allow hackers to gain control of a machine through WiFi signals from nearby devices.

The report is credited to Nico Waisman of GitHub Security Lab, who disclosed the vulnerability, which affects Linux kernels as far back as version 3.10.1, released in 2013. The attack works by adding vendor-specific data elements to WiFi beacons which, once received by a vulnerable device, trigger a buffer overflow in the Linux kernel.

The vulnerability, tracked as CVE-2019-17666, lies in the rtl_p2p_noa_ie function in drivers/net/wireless/realtek/rtlwifi/ps.c in the Linux kernel through 5.3.6, which lacks an upper-bound check, leading to a buffer overflow. It stems from the RTLWIFI driver, which supports Realtek WiFi chips on many Linux systems, and the flaw can be triggered once the device comes within radio range of a malicious device.


Waisman is still studying the exploit and working on a proof-of-concept attack demonstrating how the flaw could be used to execute code on a vulnerable machine.

The kernel developers have proposed a fix, which is expected to be released for the Linux kernel within days and then rolled out to the various Linux distributions.

Linux Flaw allows hackers to easily Hijack Systems using WiFi signals



The end of the year is around the corner, so it behooves us to scour the web and bring you the best technology predictions for 2020, even as the new year promises plenty of surprises.

Much of the attention is on the global 5G roll-out, which will no doubt herald a new era of internet capabilities, while technology generally is evolving at an astronomical rate, making it ever harder to decide which technologies to deploy in the enterprise.

As the digital age matures, more attention will turn to what differentiates AI from humans, perhaps the biggest challenge for the emerging technology and its applications, which are poised to affect all aspects of our interactions and how we relate to our immediate environment.

What are the best Marketing & Technology trends in 2020?



The internet will undergo tremendous change, which is expected to affect how we work and may force us to define exactly where the boundary between humans and AI lies. With AI (artificial intelligence) finding new applications in banking and e-commerce, IT administrators must ensure the necessary security measures are in place to help organisations adapt to this rapidly changing world of technology.

We have selected these top 5 predictions from Gartner's top 10 strategic predictions for 2020 and beyond.

1. AI-Driven Ads & Marketing

AI (artificial intelligence) marketing involves leveraging customer data and AI capabilities, including machine learning, to predict a customer's next actions and thus help improve sales conversion.

With the increasing adoption of biometric-tracking sensors and artificial emotional intelligence, more businesses will be able to detect not only web actions but consumers' current emotions, which will be useful in knowing what will satisfy them and increase sales. Environmental and behavioral indicators from biometrics will also enable far deeper personalization of ads.

In fact, Gartner has predicted that AI identification of emotions will influence over half of online advertisements by 2024, but brands must be transparent about how they're collecting and using the consumers’ data.

2. Addiction to Online shopping

Increased access to consumer data will allow marketers to pinpoint exactly what consumers will buy and the point in the buyer's journey at which a sale will be made. As this technology becomes more sophisticated, it will become uncannily good at predicting what consumers actually want, the best price for products, and where to capture consumers' attention.

This in turn will lead consumers to purchase more and more products, with purchases they find difficult to control or stop, and buying addiction will eventually set in. Such shopping addiction demands greater responsibility from governments and consumer groups, who will need to take action against exploitative practices.

3. Beyond BYOD

Almost all organisations will expand their BYOD policies with, perhaps, 'bring your own enhancement' (BYOE), a move that will help address AI in the workplace.

IT will certainly be tempted to assert more control as human augmentation technology becomes prevalent, though the real opportunity lies in harnessing the interest in BYOE. The automotive industry already employs wearables for safety, and the healthcare sector uses the technology to maximize health and productivity.

These technologies will continue to evolve, and organisations will need to consider how the augmentations can be harnessed for personal and professional empowerment.

4. Digital Tracking by the "Internet of Behaviour"

The Internet of Behaviour (IoB) connects a person to their digital actions, for example linking your image, via facial recognition, to an activity such as buying a ticket, which can then be digitally tracked.

IoB will discourage particular behaviours and may encourage others; for instance, it could be used to penalize speeding and unsafe driving, or to reward careful drivers with lower insurance premiums.

But there are real ethical concerns about expanding IoB to reward or punish certain behaviours with access (or lack of access) to social services such as schools or housing.

5. Mobile Application of Cryptocurrency

With online marketplaces and social media platforms starting to support cryptocurrency payments, many people will begin to clamor for a transition to mobile-accessible cryptocurrency processing, and Africa is set for major adoption of the technology, where it is also expected to see the highest growth rates.

Cryptocurrency accounts will become a new driver of e-commerce as partners emerge from areas that were previously unable to access such capital markets.

All of these show how using AI can increase accessibility at work, which is itself among Gartner's strategic predictions for 2020. The predictions focus mainly on how technology is expected to transform what it means to be human, and IT leaders must be prepared to adapt to the changing environment.

Predictions for 2020: What are the best Marketing & Technology trends to Watch?



Apple's strict privacy stance, especially on the tracking of iPhone users, has taken a hit following the controversy over the iPhone 11 Pro seeking users' location even when it has not been given the necessary permissions.

The latest versions of iOS give users more granular control over the sharing of location data and over how third-party apps can access it, with Apple making the controls a lot clearer and applying a carrot-and-stick approach to improve its users' privacy.

But findings credited to security researcher Brian Krebs show that the iPhone 11 Pro "intermittently seeks the user's location information even when all applications and system services on the phone are individually set to never request this data."



The persistent location-seeking appears to be present only in the latest iPhone operating system (iOS 13.2.3) on iPhone 11 Pro devices; the oddity seems related to support for super-fast WiFi 6 routers and may involve newly introduced hardware.

Apple says the culprit is its ultra-wideband technology, an industry standard still subject to international regulatory requirements, and that iOS uses the user's location to determine whether the device is in a prohibited area where ultra-wideband must be disabled to comply with regulations.

The U1 chip in Apple's latest devices, needed for short-distance, high-bandwidth data transfer, uses location data; it also powers AirDrop, Apple's system for sharing items directly between Apple devices without going through Messages, and will presumably apply to Apple's anticipated 'tile-tracking' feature expected to debut in the near future.

Apple, however, maintains that "the management of ultra-wideband compliance and its use of location data is done entirely on the device, and that it is not collecting the user location data," which should be somewhat comforting.

Apple admits the iPhone 11 Pro intermittent Location tracking issue

Microsoft is currently working on a new 'memory safe' programming language, internally referred to as "Safe Infrastructure Programming," based on the Rust language.

The company has been experimenting with Rust to improve its software under the Project Verona initiative, as the Rust programming language is far better suited to safety than the C/C++ languages commonly used to write micro-controller firmware.

According to Microsoft, for C++ developers who build complex systems, using Rust is a breath of fresh air, and the memory safety guarantees the compiler provides give developers much greater confidence that code which compiles is free of memory safety vulnerabilities.

What is Rust & Why this Programming Language?



Rust is a programming language focused on safety; it is similar to C++ but provides better memory safety while maintaining high performance.

Rust was designed at Mozilla Research by Graydon Hoare, with contributors including Brendan Eich and Dave Herman, among others. The language was refined through the writing of the Servo layout engine (a browser engine) and the Rust compiler, which is free and open-source software dual-licensed under the Apache License 2.0 and the MIT License.

Like C/C++, Rust has a minimal optional "runtime," but the difference lies in its strong safety guarantees: unless you explicitly opt out by using the "unsafe" keyword, Rust code is memory safe. Rust's safety guarantees are preserved by the strict rules placed around use of the unsafe keyword.

Memory safety issues in the C/C++ Languages



"Memory safety" refers to protections that keep a program's memory space from being corrupted or taken over, for example by malware exploiting a bug. C and C++ are extremely good for writing low-level systems and require very few resources on the machine, but they are unsafe in this respect; when they were developed, memory safety was not a consideration.

That is all the more reason Microsoft has started experimenting with Rust in an attempt to minimize bugs in its software, which should ultimately reduce memory-safety-related vulnerabilities.

The major obstacle to achieving this goal is that it isn't possible to rewrite all of the software from scratch in Rust. Microsoft is therefore attempting to make Rust coexist with other languages, which unfortunately cannot guarantee complete safety.

Microsoft's attempt to create Safe Infrastructure Programming with Rust Language



The malicious activity connected with the Avast and AVG browser extensions has resulted in their removal from the Firefox Add-ons store until the companies concerned are able to resolve the issue.

According to security researcher Wladimir Palant, the extensions send a large amount of personal data about users' browsing habits, far beyond what should be necessary for the extensions to function.

The extensions were designed to warn users when they visit malicious or phishing websites; they include the Avast and AVG extensions along with subsidiary programs like Avast SafePrice and AVG SafePrice, the latter meant to help online shoppers find the best offers through price comparisons and discount coupons from various websites.



The software is stealthy in that downloading and installing either of the main extensions in your web browser will automatically install the respective subsidiary add-on as well.

Personal Data Collected by the Add-ons



  • Browsing history
  • Unique User Identifier (UID) for tracking
  • Browser version and number
  • Operating system and version number
  • Location data


How the Software Uses the collected Data



The tracking and window identifiers allow Avast to reconstruct users' browsing activity precisely: the number of tabs opened, the websites visited and time spent on each, what was clicked, and when the user switched to a different tab.

Notably, all of this is tied to a number of attributes, including the UID, which allows Avast to recognize you accurately and reliably.

The issue also affects Google Chrome; Mozilla was quick to act by temporarily removing the add-ons from the Firefox extensions store, but Google has yet to remove them from the Chrome Web Store.

Mozilla removes Avast and AVG Browser Extensions for Spying On Firefox Users



Until now, to transfer your media files off the Facebook platform you were required to use "Download Your Information" first, which gives you access to a secure copy of the data you have shared with Facebook.

Last year, Facebook announced the Data Transfer Project, a collaboration with Google, Apple, Microsoft, and Twitter to build a common way for people to move data between different services. Now the company is testing a new feature that will let users transfer their photos and videos directly to Google Photos, without having to download and re-upload them.

Facebook developed the feature using the Data Transfer Project (DTP), a universal protocol for data import and export that aims to give web users more control over their personal data and let them quickly move it between whichever online services or apps they choose.

Why is the Data Transfer Project (DTP) necessary?



The Data Transfer Project was created to build an open-source, service-to-service data portability system that allows individuals to easily move their data between different online services. The project provides an open-source library that these services can use to manage direct transfers on behalf of their users.



Instead of expecting web companies to build their own systems from scratch, the Data Transfer Project's open-source framework allows them to share improvements as well as data models.

That is, if a company is using the Data Transfer Project framework, it can send an existing data type to a new service simply by creating a new Data Transfer Project importer for that data type. The new importer can also be contributed back to the open-source project, allowing other companies to export to the new service with no additional technical work.
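Purely as an illustration of that adapter idea, the sketch below shows what an importer boundary might look like; the interface and type names are invented for this example and are not the Data Transfer Project's actual API.

```typescript
// Illustrative only: invented names, not the Data Transfer Project's real API.
interface PhotoItem {
  title: string;
  downloadUrl: string;
}

interface PhotoImporter {
  // Receives items exported from another service and writes them into
  // the destination service on the user's behalf.
  importPhotos(items: PhotoItem[], accessToken: string): Promise<void>;
}

class ExampleServiceImporter implements PhotoImporter {
  async importPhotos(items: PhotoItem[], accessToken: string): Promise<void> {
    for (const item of items) {
      // A real importer would call the destination service's upload API here.
      console.log(`Would upload "${item.title}" using token ${accessToken}`);
    }
  }
}
```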

Any Security & Reliability Issues?



The system requests only the permissions required for the task at hand, and access to the destination service ends when the transfer is completed. Transfers can only be initiated by the account's owner, and verification is required: Facebook asks the individual to re-enter their password before initiating any transfer.

Facebook also sends an email to the registered account whenever a transfer is requested, giving the owner a way to stop the transfer should they change their mind or not have initiated the request.

The feature is currently being tested with select Facebook users in Ireland, and for now it only allows the transfer of files to Google Photos. The company is expected to add support for more web services and data types in the near future.

Facebook Testing Tool to Allow Users Transfer Photos/Videos to Google Photos



The move by leading mobile carriers to replace SMS with the RCS messaging standard may have hit a brick wall, as it is making mobile users vulnerable to call interception, text-based attacks, location tracking, and other security threats, according to security researchers.

The RCS standard was officially adopted by the GSMA in 2008, with a Steering Committee established; the GSMA later partnered with Google and 15 other global carriers to push the adoption of Rich Communication Services (RCS). Now the leading mobile carriers are working with interest groups and other mobile companies to deploy the new messaging standard in the text messaging app on Android phones.

The RCS standard isn't inherently flawed, but the way network carriers are implementing RCS at scale exposes mobile users to several security threats.

According to researchers at SRLabs, there are flaws in how telecoms provision the RCS configuration files to Android devices: the configuration file is fetched from a server that identifies the device only by its IP address, so any app on the phone can request the file, with or without permission, by using that IP address.

In other words, such apps can easily obtain the username and password giving access to all your messages and voice calls.

Besides this, there are lapses in the authentication process: the telecom simply sends a unique authentication code to verify the user's identity, but because some carriers allow an "unlimited number of tries," hackers can work their way past the authentication with repeated attempts.

RCS messages are also not end-to-end encrypted, and Apple, one of the leading mobile players, has shown no interest in RCS, since iMessage already offers more than the technology promises. How to make the standard work with the iPhone therefore remains an open issue.

These points may hinder general adoption of the standard, coupled with mobile carriers' and phone makers' complicated policies and the fact that some service providers are implementing non-universal specifications of RCS, which limits RCS-based messaging to subscribers of their own networks.

RCS Messaging Standard adoption may suffer due to several factors



Microsoft's Windows 10 update naming format usually follows a sequence of numbers for the year (yy) and month (mm) of release, so 2020's first-half release (code-named 20H1) would ordinarily have translated to 2003; but that has changed with the latest beta build of the upcoming upgrade.

The next Windows 10 refresh has instead been tagged with the four-digit label 2004, a designation Microsoft says is meant to avoid confusion with the long-retired Windows Server 2003, an implicit admission that the label no longer matches the upgrade's release timeline.

Microsoft released Windows Server 2003 in April 2003; it was eventually retired in July 2015, having been superseded by Windows Server 2008.

Microsoft's naming of Windows 10 upgrades has in any case been irregular: Windows 10 1607, for instance, debuted on August 2, 2016, not in July as the name indicated; 1703 and 1803 both debuted in April, while 1903 shipped in May. So there is no guarantee that Windows 10 2004 will actually go live next April.
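For what it's worth, the naming convention and the Server 2003 workaround amount to a couple of lines of logic; the TypeScript sketch below is purely illustrative and the function name is made up.

// Purely illustrative: derive a yymm-style Windows 10 label from a target
// release date, special-casing the documented "2003" -> "2004" exception.
function versionLabel(year: number, month: number): string {
  const yy = String(year % 100).padStart(2, "0");
  const mm = String(month).padStart(2, "0");
  const label = `${yy}${mm}`;
  return label === "2003" ? "2004" : label;   // avoid clashing with Windows Server 2003
}

console.log(versionLabel(2016, 7)); // "1607" (though it actually shipped in August)
console.log(versionLabel(2020, 3)); // "2004" rather than "2003"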

It's still unclear when Microsoft will begin beta testing, but if the company follows this year's pattern, it may well skip early builds of 2020's fall upgrade (2009, or 20H2) and, just as in 2019, give Insider participants very early versions of the 2021 spring upgrade (2103, or 21H1).

Microsoft's code names are admittedly confusing, and the major-minor model of 2019 may prove to be a one-off, with 2020's releases returning to normal, at least insofar as Microsoft's evolving servicing policies can be called normal.

Microsoft has already released Windows 10 Insider Preview Build 19033 (20H1) to Windows Insiders in both the Fast and Slow rings, but it has declined to say exactly when testing for the early-2020 release will begin, since the recently released Windows 10 1909 was only a minor upgrade.

Microsoft's next Windows 10 Update designated with 2004 four-digit label



Google has identified several state-sponsored hacking attempts against its users; as reported by the company's Threat Analysis Group (TAG), over 90 percent of targeted users received "credential phishing emails" aimed at tricking victims into giving up access to their accounts.

The TAG initiative tracks more than 270 hacking groups, from about 50 countries, believed to be government-backed and involved in intelligence gathering, theft of intellectual property, attacks targeting journalists, dissidents, and activists, and the spreading of misinformation.

As a preventive measure against this evolving trend of digital warfare, in which nation-states launch cyber attacks against organizations and individuals of interest, Google created TAG and a notification system aimed at alerting users whose accounts are most likely at risk of such government-backed attacks.

With the era of nation-states engaging in digital warfare upon us, another report describes a Russian military-backed hacking group named Sandworm, responsible for some of the most destructive cyber-attacks in history, which first surfaced in 2014 with a cyber-attack on the Ukrainian electric grid.

In 2016, Sandworm struck the Ukrainian power grid again, a disruptive attack on an enemy seen as being in the midst of a kinetic war. But thanks to an error in the configuration of Sandworm's malware, an attack that could have been far more devastating had only a limited effect.

Had the malware been able to carry out its full set of programmed commands, it could have burned out power lines or even blown up transformers, destruction of a kind that could previously only be imagined.

Google constantly looks out for such malicious activity on its systems, and if any such activity or unauthorized attempt to log in to a user's account is detected, it alerts the user with warning notifications. The warnings don't mean your account has actually been compromised; rather, they mean you are a target of state-sponsored cyber-attacks.

High-risk users who are targeted are therefore advised to take measures to protect their accounts, including keeping software up to date and using an authenticator app or a security key, the latter being the best defense according to Google.

Google's Threat Analysis Group (TAG) warns about State-sponsored Cyber-attacks


Google had earlier announced, back in April, that it would bring the voice assistant to G Suite; now the company is rolling out capabilities that allow users to manage their Calendar schedules using voice commands.

Google has long touted its AI capabilities and what they mean for businesses; adding these machine-learning abilities to the G Suite platform gives users access to Google Assistant from within G Suite apps, along with Smart Compose auto-suggestions for Google Docs.

Other possibilities include sending emails to all participants of a meeting, making voice and video calls with Google's Meet app, and interacting with Google Meet hardware in conference rooms, both to join and to leave meetings.

Google is leaning on its extensive suite of enterprise applications to push the use of voice technologies in the workplace, and Gartner has predicted that about 25% of digital workers will interact with virtual assistants on a daily basis by 2021. Given Google Assistant's popularity, coupled with the robust security features of Google Cloud, Google is well positioned to lead in the workplace, significantly ahead of its competitors.



The company also hinted at bringing the Smart Compose feature, already live in Gmail, to Google Docs; it uses AI to suggest common phrases as users draft documents, reducing unnecessary repetition and allowing for faster writing, and it also suggests corrections for misspelled words.

Google's AI tool can recognize commonly used words that are specific to an organization, which helps avoid erroneously marking them as incorrect.

The remaining challenge, however, is how Google can reassure enterprise customers about privacy and ensure that all essential functionality is in place for every industry across all regions.

Google’s AI Assistant now accessible within its G Suite productivity tools



MTN Nigeria has commenced its 5G network trial, making the country the second in Africa to do so, following the successful launch of a C-band 5G trial in South Africa earlier in the month.

MTN is leveraging Huawei technology for the 5G network trial in Abuja, while the Lagos and Calabar trials scheduled to follow will be powered by Ericsson and ZTE respectively. MTN secured 100 MHz of new spectrum for the trial in Calabar and 20 MHz for Abuja, which may result in a significant difference in 5G internet speeds between the two cities.

Unlike optical fiber, which requires cables, 5G, the fifth generation of cellular network technology, basically runs on a cloud-based architecture.

What's the Speed difference between 4G and 5G Cellular Technology?



4G is currently the most dominant network technology in the world, but that won't be the case for long, as many nations are already deploying 5G at an astonishing rate, and things are about to change. 4G brought a lot of promising emerging technology into the limelight, such as the Internet of Things (IoT), which is now a real possibility, and it made it possible to manage a huge number of connections on the network.

4G mobile internet speeds were reportedly up to 500 times faster than the earlier 3G technology, amply supporting HD TV on mobile, high-quality video calls, and fast browsing and streaming experiences for mobile users.



However, 5G is far faster and more efficient than 4G. It promises mobile data speeds well ahead of the fastest broadband networks currently available, of up to 100 gigabits per second; in fact, 5G promises to be up to 100 times faster than 4G. In a comparison test by MTN Nigeria on both networks, it took only about 15 seconds to download a 2 GB video on the 5G network, while the same file took close to four minutes on 4G (3.84 minutes, going by the reported figure).

The download speed on the MTN 5G network was about 1 Gbps, and it could go as high as 4.1 Gbps; the 4G network managed about 68.2 Mbps, just a fraction of the 5G speed. It should also be noted that the 4G network runs on the 2600 MHz band, against 26 GHz for the 5G trial.
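As a quick sanity check of those figures, the arithmetic is simply file size in bits divided by link speed; the short TypeScript sketch below is illustrative only.

// Illustrative arithmetic: download time ≈ file size in bits / link speed.
function downloadSeconds(fileGB: number, speedMbps: number): number {
  const megabits = fileGB * 8 * 1000;   // GB -> megabits (decimal units)
  return megabits / speedMbps;
}

console.log(downloadSeconds(2, 1000)); // ~16 s on a ~1 Gbps 5G link
console.log(downloadSeconds(2, 68.2)); // ~235 s, i.e. roughly 4 minutes on 4G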

Does Nigeria have 5G-enabled devices, yet?



No doubt the country is witnessing an unprecedented level of mobile proliferation, but a majority of that huge pool of smartphones are sub-standard Chinese phones that cannot even boast a 4G chipset, let alone 5G.

This aggravates the worry about the level of penetration of 5G-ready devices. However, there is some good news in the form of customer premises equipment (CPE), whereby Nigerians can experience the 5G network right in their homes on any device, even one that isn't 5G-enabled, since all that's required is a device with WiFi connectivity.

In fact, it is projected that there will be over 20 billion connected devices by 2020, all of which will require connections of greater capability. Will Nigeria be ready, then?

5G Trial: Is Nigeria ready for the Next Generation of Cellular Technology?



FIDO2 is made up of the WebAuthn browser API standard and the FIDO CTAP (Client to Authenticator Protocol), and it builds on the FIDO Alliance's earlier Universal 2nd Factor (U2F) authentication standard.

WebAuthn is the World Wide Web Consortium (W3C)-approved web authentication standard, which has been adopted by many other tech industry leaders; the WebAuthn API allows strong, hardware-backed authentication in the browser using devices such as security keys, mobile phones (over NFC), and built-in authenticators like Touch ID.
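In practice, registering such an authenticator from the browser goes through navigator.credentials.create(); the TypeScript sketch below is a minimal illustration, with a made-up relying party, user, and locally generated challenge standing in for values a real server would supply.

// Minimal, illustrative WebAuthn registration; the relying party, user
// details, and challenge here are hypothetical (the challenge must come
// from the server in a real deployment).
async function registerSecurityKey(): Promise<void> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"),
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],                   // ES256
    authenticatorSelection: { authenticatorAttachment: "cross-platform" }, // roaming key over USB/NFC/BLE
    timeout: 60000,
  };

  // The browser prompts for the security key; the resulting credential is
  // then sent to the server for verification and storage.
  const credential = await navigator.credentials.create({ publicKey });
  if (credential) {
    console.log("New credential id:", (credential as PublicKeyCredential).id);
  }
}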

So, in order to deliver a more secure experience for users, Twitter is offering a set of two-factor authentication methods to help improve the security of the accounts on the platform.



Twitter 2FA with the added option of security keys stands out as one of the strongest authentication methods for thwarting phishing, with low friction for users. Twitter has already supported security key-based 2FA for about a year now, but the prevailing FIDO U2F standard supported only a limited number of authenticators and browsers, restricting widespread adoption.

The Client to Authenticator Protocol (CTAP) enables FIDO2-capable devices to interface with external or roaming authenticators over USB, Bluetooth, or Near Field Communication (NFC). It establishes a secure device-to-device authentication channel, typically between a user-owned cryptographic roaming authenticator, such as a smartphone or hardware security key, and a client platform like a laptop.

With this update to 2FA, Twitter seeks to offer an upgraded, more secure authentication standard for security keys, one that supports more browsers and the authenticators of the future.

WebAuthn follows the same process as registering a security key and is enabled by default; for now, only physical security key authenticators are supported with WebAuthn, but Twitter has promised to add support for more options in the future.

Twitter adds support for FIDO2 to enhance device-based 2FA on the platform



Mozilla has released a list of connected devices that it deems "Privacy Not Included" gadgets, based on their creepiness and their lack of basic security standards.

While the report is meant to forewarn users about what not to buy during holiday shopping, it also helps identify which connected devices and gadgets are trustworthy and protect the privacy of their customers.

According to Mozilla, most of the smart home devices from Google, Amazon, and Facebook pose some privacy risk; the Amazon Ring, for instance, has been compromised in the past and is capable of eavesdropping using GPS data, and it remains unclear whether users can delete the data accumulated on the device.



Mozilla cited a similar issue with the Amazon Ring Indoor Cam, noting that stored customer data such as video recordings are not encrypted on Amazon's cloud servers and could be accessed by its employees. Along with some other Ring products, the security cam was dinged by Mozilla for creepy privacy policies and because Amazon isn't as transparent as Mozilla would like about its data retention and deletion policies.

In the report, Mozilla reviewed a total of 76 gadgets available for purchase across six main categories: Smart Home, Toys & Games, Entertainment, Health & Exercise, Wearables, and Pets.

Wearables are the new cutting-edge technology: functional smart devices that can track your steps, monitor your heart rate, or even help you manage pain. But it's important to note that these devices also collect lots of data about you and your daily activities.



Mozilla also indicted the Apple AirPods, stating that "they know when they're in your ear, when you take them out, when you're talking--Hey Siri!--and when you're listening. They connect to your iPhone and your Apple Watch at the same time, so you can...get confused about which device is playing? These earbuds sound almost too smart for their own good". Either that, or they really are as magical as Apple claims.

Mozilla scores the devices using an interactive tool that lets shoppers rate the creepiness of the products on an emoji sliding scale ranging from Not Creepy to Super Creepy.

Although Google Home passed Mozilla's security standards, it was still rated "super creepy" by users, owing to the fact that Google targets user data through search history, location, and many other sources. Google's Nest Hub Max was dubbed "very creepy", a notch below the Google Home's rating.

Meanwhile, the Nintendo Switch and Sonos One SL were rated "not creepy", as they passed Mozilla's privacy standards, though the three Apple products reviewed raised privacy concerns that could allow someone to spy on users via the microphone, camera, and location tracking.

Mozilla releases #PrivacyNotIncluded List of Smart Speakers and Wireless devices