eSpeak Formant Synthesizer

Last update : November 2, 2014

eSpeak

eSpeak is a compact multi-platform multi-language open source speech synthesizer using a formant synthesis method.

eSpeak is derived from the “Speak” speech synthesizer for British English for Acorn RISC OS computers, developed by Jonathan Duddington in 1995. Duddington remains the author of the current eSpeak version 1.48.12, released on November 1, 2014. The sources are available on SourceForge.

eSpeak provides two methods of formant synthesis : the original eSpeak synthesizer and a Klatt synthesizer. It can also be used as a front end for MBROLA diphone voices. eSpeak can be used as a command-line program or as a shared library. On Windows, a SAPI5 version is also installed. eSpeak supports SSML (Speech Synthesis Markup Language) and uses an ASCII representation of phoneme names which is loosely based on the Kirshenbaum system.

In formant synthesis, voiced speech (vowels and sonorant consonants) is created using formants. Unvoiced consonants are created using pre-recorded sounds. Voiced consonants are created as a mixture of a formant-based voiced sound and a pre-recorded unvoiced sound. The eSpeakEditor can generate formant files for individual vowels and voiced consonants, based on a sequence of keyframes which define how the formant peaks (peaks in the frequency spectrum) vary during the sound. A sequence of formant frames can be created with a modified version of Praat, a free scientific software package for the analysis of speech in phonetics. The Praat formant frames, saved in a spectrum.dat file, can be converted to formant keyframes with eSpeakEdit.

To use eSpeak on the command line, type

espeak "Hello world"

There are plenty of command-line options available, for instance to read the text from a file, to adjust the volume, the pitch, the speed or the gaps between words, to select a voice or a language, etc.
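
A few of these options are sketched below (the voice name and values are illustrative; run espeak with no arguments or consult the documentation for the full list) :

```shell
# read the text from a file, using the French voice
espeak -v fr -f text.txt

# adjust amplitude (-a), pitch (-p), speed (-s) and word gap (-g)
espeak -a 150 -p 60 -s 120 -g 10 "Hello world"

# write the output to a WAV file instead of playing it
espeak -w hello.wav "Hello world"
```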

To use the MBROLA voices in the Windows SAPI5 GUI or at the command line, they have to be installed during the setup of the program. It’s possible to rerun the setup to add additional voices. To list the available voices type

espeak --voices

eSpeak uses a master phoneme file containing the utility phonemes, the consonants and a schwa. The file is named phonemes (without extension) and located in the espeak/phsource program folder. The vowels are defined in the language-specific phoneme files in text format. These files can also redefine consonants if you wish. The language-specific phoneme text files are located in the same espeak/phsource folder and must be referenced in the phonemes master file (see the example for Luxembourgish below).

....
phonemetable lb base
include ph_luxembourgish

In addition to the specific phoneme file ph_luxembourgish (without extension), the following files are required to add a new language, e.g. Luxembourgish :

lb file (without extension) in the folder espeak/espeak-data/voices : a text file which in its simplest form contains only 2 lines :

name luxembourgish
language lb

lb_rules file (without extension) in the folder espeak/dictsource : a text file which contains the spelling-to-phoneme translation rules.

lb_list file (without extension) in the folder espeak/dictsource : a text file which contains pronunciations for special words (numbers, symbols, names, …).
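
As a rough illustration of these two file formats (the words and phoneme codes below are invented examples, not actual entries from the Luxembourgish data) :

```
// lb_rules : spelling-to-phoneme rules, grouped by initial letter
.group a
       a        a      // the letter "a" is spoken as phoneme a
       au       au     // the digraph "au" maps to the diphthong au

// lb_list : pronunciations for special words, one per line
moien    m'OI@n
```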

The eSpeakEditor (espeakedit.exe) compiles the lb_ files into an lb_dict file (without extension) in the folder espeak/espeak-data and adds the new phonemes to the files phontab, phonindex and phondata in the same folder. These compiled files are used by eSpeak for the speech synthesis. The file phondata-manifest lists the type of data that has been compiled into the phondata file. The files dict_log and dict_phonemes provide information about the phonemes used in the lb_rules and lb_dict files.

eSpeak applies tunes to model intonations depending on punctuation (questions, statements, attitudes, interaction). The tunes (s.. = full-stop, c.. = comma, q.. = question, e.. = exclamation) used for a language can be specified by using a tunes statement in the voice file.

tunes s1  c1  q1a  e1

The named tunes are defined in the text file espeak/phsource/intonation (without extension) and must be compiled for use by eSpeak with the espeakedit.exe program (menu : Compile intonation data).

meSpeak.js

Three years ago, Matthew Temple ported the eSpeak program to JavaScript using Emscripten : speak.js. Based on this JavaScript project, Norbert Landsteiner from Austria created the meSpeak.js text-to-speech web library. The latest version is 1.9.6, released in February 2014.

meSpeak.js is supported by most browsers. It introduces loadable voice modules. The typical usage of the meSpeak.js library is shown below :

<!DOCTYPE html>
<html lang="en">
<head>
 <title>Bonjour le monde</title>
 <script type="text/javascript" src="mespeak.js"></script>
 <script type="text/javascript">
 meSpeak.loadConfig("mespeak_config.json");
 meSpeak.loadVoice("voices/fr.json");
 function speakIt() {
 meSpeak.speak("Bonjour le monde");
 }
 </script>
</head>
<body>
<h1>Try meSpeak.js</h1>
<button onclick="speakIt();">Speak It</button>
</body>
</html>


The mespeak_config.json file contains the data of the phontab, phonindex, phondata and intonations files plus the default configuration values (amplitude, pitch, …). This data is encoded as a Base64 octet stream. A voice .json file includes the id of the voice, the dictionary used and the corresponding binary data (Base64 encoded) of these two files. There are various desktop and online Base64 decoders and encoders available on the net to create the required .json files (base64decode.org, motobit.com, activexdev.com, …).

meSpeak can mix multiple parts (different languages or voices) in a single utterance. meSpeak supports the Web Audio API (AudioContext) with internal WAV files; Flash is used as a fallback.

Links

A list with links to websites providing additional information about eSpeak and meSpeak follows :

Language : fr, de, en, lb, eo

Last update : November 7, 2021

Language is the human capacity for acquiring and using complex systems of communication, and a language is any specific example of such a system. The scientific study of language is called linguistics.

In the context of a text-to-speech (TTS) and automatic-speech-recognition (ASR) project, I assembled the following information about the French, German, English, Luxembourgish and Esperanto languages.

French

French is a Romance language spoken worldwide by 340 million people. Written French uses the 26 letters of the Latin script, four diacritics appearing on vowels (circumflex accent, acute accent, grave accent, diaeresis) and the cedilla appearing in ç. There are two ligatures, œ and æ. The French language is regulated by the Académie française. The language codes are fr (ISO 639-1), fre, fra (ISO 639-2) and fra (ISO 639-3).

Spoken French distinguishes 26 vowels, plus 8 more in Quebec French. There are 23 consonants. The Grand Robert lists about 100,000 French words.

German

German is a West Germanic language spoken by 120 million people. In addition to the 26 standard Latin letters, German has three vowels with umlauts and the letter ß, called Eszett. German is the most widely spoken native language in the European Union. The German language is regulated by the Rat für deutsche Rechtschreibung. The language codes are de (ISO 639-1), ger, deu (ISO 639-2) and 22 variants in ISO 639-3.

Spoken German uses 29 vowels and 27 consonants. The 2013 release of the Duden lists about 140,000 German words.

English

English is a West Germanic language spoken by more than a billion people. It is an official language of almost 60 sovereign states and the third-most-common native language in the world. Written English uses the 26 letters of the Latin script, with rare optional ligatures in words derived from Latin or Greek. There is no regulatory body for the English language. The language codes are en (ISO 639-1) and eng (ISO 639-2 and ISO 639-3).

Spoken English distinguishes 25 vowels and 34 consonants, including the variants used in the United Kingdom and the United States. The Oxford English Dictionary lists more than 250,000 distinct words, not including many technical, scientific and slang terms.

Luxembourgish

Luxembourgish (Lëtzebuergesch) is a Moselle Franconian variety of West Central German that is spoken mainly in Luxembourg by about 400,000 native speakers. The Luxembourgish alphabet consists of the 26 Latin letters plus three letters with diacritics: é, ä, and ë. In loanwords from French and German, the original diacritics are usually preserved. The Luxembourgish language is regulated by the Conseil Permanent de la Langue Luxembourgeoise (CPLL). The language codes are lb (ISO 639-1) and ltz (ISO 639-2 and ISO 639-3).

Spoken Luxembourgish uses 22 vowels (14 monophthongs, 8 diphthongs) and 26 consonants. The Luxembourgish-French dictionary dico.lu includes about 50,000 words; the Luxembourgish-German dictionary luxdico lists about 26,000 words. The full online Luxembourgish dictionary www.lod.lu is under construction; at present, words beginning with A-S can be accessed via the search engine.

Esperanto

Esperanto is a constructed international auxiliary language. Between 100,000 and 2,000,000 people worldwide fluently or actively speak Esperanto. Esperanto was recognized by UNESCO in 1954, and Google Translate added it in 2012 as its 64th language. The 28-letter Esperanto alphabet is based on the Latin script, using a one-sound-one-letter principle. It includes six letters with diacritics: ĉ, ĝ, ĥ, ĵ, ŝ (with circumflex), and ŭ (with breve). The alphabet does not include the letters q, w, x, or y, which are only used when writing unassimilated foreign terms or proper names. The language is regulated by the Akademio de Esperanto. The language codes are eo (ISO 639-1) and epo (ISO 639-2 and ISO 639-3).

Esperanto has 5 vowels, 23 consonants and 2 semivowels that combine with the vowels to form 6 diphthongs. The core vocabulary of Esperanto contains 900 roots which can be expanded into tens of thousands of words using prefixes, suffixes, and compounding.
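
The expansion of a single root can be illustrated as follows (the glosses are mine) :

```
san-             root : "health"
sana             healthy
malsana          ill            (mal- : opposite)
malsanulo        a sick person  (-ul- : person)
malsanulejo      hospital       (-ej- : place)
```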

Links

A list with links to websites with additional information about the five languages (mainly Luxembourgish) is shown hereafter :

Phonemes, phones, graphemes and visemes

Phonemes

A phoneme is the smallest structural unit that distinguishes meaning in a language, studied in phonology (a branch of linguistics concerned with the systematic organization of sounds in languages). Linguistics is the scientific study of language. Phonemes are not the physical segments themselves, but are cognitive abstractions or categorizations of them. They are abstract, idealised sounds that are never pronounced and never heard. Phonemes are combined with other phonemes to form meaningful units such as words or morphemes.

A morpheme is the smallest meaningful (grammatical) unit in a language. A morpheme is not identical to a word, and the principal difference between the two is that a morpheme may or may not stand alone, whereas a word, by definition, is freestanding. The field of study dedicated to morphemes is called morphology.

Phones

Concrete speech sounds can be regarded as the realisation of phonemes by individual speakers, and are referred to as phones. A phone is a unit of speech sound in phonetics (another branch of linguistics that comprises the study of the sounds of human speech).  Phones are represented with phonetic symbols. The IPA (International Phonetic Alphabet) is an alphabetic system of phonetic notation based primarily on the Latin alphabet. It was created by the International Phonetic Association as a standardized representation of the sounds of oral language.

In IPA transcription phones are conventionally placed between square brackets and phonemes are placed between slashes.

English Word : make
Phonetics : [meik]
Phonology : /meːk/   /maik/   /meiʔ/

Each of the multiple possible phones used to pronounce a single phoneme is called an allophone in phonology.

Graphemes

Analogous to the phonemes of spoken languages, the smallest semantically distinguishing unit in a written language is called a grapheme. Graphemes include alphabetic letters, typographic ligatures, Chinese characters, numerical digits, punctuation marks, and other individual symbols of any of the world’s writing systems.

Grapheme examples

In transcription graphemes are usually notated within angle brackets.

<a>  <W>  <5>  <i>  <ق>

A grapheme is an abstract concept, it is represented by a specific shape in a specific typeface called a glyph. Different glyphs representing the same grapheme are called allographs.

In an ideal phonemic orthography, there would be a complete one-to-one correspondence between the graphemes and the phonemes of the language. English is highly non-phonemic, whereas Finnish comes much closer to being consistently phonemic.

Visemes

A viseme is a generic facial shape that can be used to describe a particular sound. Visemes are for lipreaders what phonemes are for listeners: the smallest standardized building blocks of words. However, visemes and phonemes do not share a one-to-one correspondence.


Links

A list with links to websites with additional information about phonemes, phones, graphemes and visemes is shown hereafter :

Picture element and srcset attribute

Last update : July 5, 2014

Jason Grigsby outlined two years ago that there are two separate, but related requirements that need to be addressed regarding the use of the <img> element in responsive designs :

  1. enable authors to provide different resolutions of images based on different environmental conditions
  2. enable authors to display different images under different conditions based on art direction

Resolution Switching

When we handle an image for Retina displays, it makes sense to deliver a crisp, high-resolution picture to the browser. When we send the same image to a mobile phone with a small screen or to a tablet on a slow connection, it’s efficient to save bandwidth and to reduce the loading and processing time by providing a small picture.

In HTML, a browser’s environmental conditions are primarily expressed as CSS media features (orientation, max-width, pixel-density, …) and CSS media types (screen, print, …). Most media features are dynamic (a browser window is resized, a device is rotated, …). Thus a browser constantly responds to events that change the properties of the media features. Swapping images provides a means to continue communicating effectively as the media features change dynamically.

Art Direction

When we display an image about a subject (e.g. the brain structure) at a large size, it makes sense to show the context. When we display the same image on a small screen, it’s useful to crop it and to focus on a detail. This differentiation is ruled by art direction.

For small screens a detail looks better than the resized original (Wikipedia) picture

Breakpoints

The @media query inside CSS or the media attribute of the link element are the key ingredients of responsive design. There are several tactics for deciding where to put breakpoints (tweak points, optimization points). As there are no common screen sizes, it doesn’t make sense to base the breakpoints on a particular screen size. A better idea is to follow classic readability theory and to break the layout when the width of a column exceeds 75 characters or about 10 words. These are the breakpoints. Vasilis van Gemert created a simple sliding tool to show the impact of language and font family on the text width.
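
Such a readability-based breakpoint can be sketched in CSS like this (the 36em value is an assumption, roughly 75 characters in many fonts; measure your own typeface first) :

```css
/* single column by default (mobile first) */
.column { width: 100%; }

/* break the layout once a full-width column would exceed ~75 characters */
@media (min-width: 36em) {
  .column { max-width: 36em; margin: 0 auto; }
}
```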

Lucky responsive techniques

In the recent past, web developers relied on various techniques (CSS background images, JavaScript libraries, semantically neutral elements, <base> tag switching, …) to use responsive images in their applications. All of these techniques have significant limits and disadvantages (bypassing the browser’s preload scanner, redundant HTTP requests, complexity, high processing time, …). For all these reasons, a standardized solution was wanted.

Possible responsive image solutions

The proposed solutions to deal with one or with both of the requirements for responsive images (“resolution switching” and “art-direction“) are the following :

  • <picture> element : addresses requirement #2 (the author selects the image series and specifies the display rules for the browser)
  • srcset and sizes attributes : addresses requirement #1 (the browser selects the image resolution based on information provided by the author)
  • CSS4 image-set : addresses requirement #1 (the browser selects the images based on information provided by the author)
  • HTTP2 client hints : addresses requirements #1 and #2 (the server selects the images based on rules specified by the author)
  • new image format : addresses requirement #1 (there is only one image)

Responsive image standardization

On June 20, 2014, Anselm Hannemann, a freelance front-end developer from Germany, announced on his blog that the <picture> element and the attributes srcset and sizes are now web standards. The discussions and debates about the specification of a native responsive images solution in HTML lasted more than 3 years inside WHATWG, RICG and W3C.

The Responsive Images Community Group (RICG) is a group of developers working towards a client-side solution for delivering alternate image data based on device capabilities to prevent wasted bandwidth and optimize display for both screen and print. RICG is a community group of the World Wide Web Consortium (W3C). The group is chaired by Mathew Marquis of the Filament Group and has 362 participants, among them the Responsive Web Design pioneers Nicolas Gallagher, Bruce Lawson, Jason Grigsby, Scott Jehl, Matt Wilcox and Anselm Hannemann.

The RICG drafted a picture specification (editors draft July 1, 2014) with the new HTML5 <picture> element and the srcset and sizes attributes that extends the img and source elements to allow authors to declaratively control or give hints to the user agent about which image resource to use, based on the screen pixel density, viewport size, image format, and other factors.

Bruce Lawson was the first to propose the <picture> element and he has a degree of attachment to it. The srcset attribute was presented on the WHATWG mailing list by someone from Apple. At first, the majority of developers favored the <picture> element and the majority of implementors favored the srcset attribute. The W3C states how the priority should be given when determining standards:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

Both WHATWG and W3C included now the <picture> element and the srcset and sizes attributes to the HTML5 specification. The links are given below :

The <picture> element

The use of the <picture> element is shown in the following code examples :

<picture>
 <source srcset="brain-mobile.jpg, brain-mobile-x.jpg 2x">
 <source media="(min-width: 480px)" srcset="brain-tablet.jpg, 
    brain-tablet-hd.jpg 2x">
 <source media="(min-width: 1024px)" srcset="brain-desktop.jpg, 
    brain-desktop-hd.jpg 2x">
 <img src="brain.jpg" alt="Brain Structure">
</picture>

With a mobile-first approach, the image “brain-mobile.jpg” is rendered by default, the image “brain-tablet.jpg” is rendered if the user screen is at least 480px wide, and “brain-desktop.jpg” is rendered if the user screen is at least 1024px wide. The image “brain.jpg” is intended for browsers that don’t understand the <picture> element. The second URL in each srcset attribute is paired with the string 2x, separated by a space; it targets users with a high-resolution display (like Retina displays with a pixel density of 2x).

<picture>
<source sizes="100%" srcset="brain-mobile.jpg 480w, 
brain-tablet.jpg 768w, brain-desktop.png 1024w">
<img src="brain.jpg" alt="Brain Structure">
</picture>

In the second example the sizes attribute is used to let the image cover all the width of the device (100%), regardless of its actual size and pixel density. The browser will automatically calculate the effective pixel density of the image and choose which one to download accordingly.

The four images brain-mobile.jpg, brain.jpg, brain-tablet.jpg and brain-desktop.jpg not only have different dimensions, but may  also have different content. This way authors are enabled to display different images under different conditions, based on art direction.

The <picture> element should not be confused with the HTML5 <figure> element which represents some flow content. The <figure> element is able to have a caption, typically the <figcaption> element.

<figure>
   <figcaption>Brain Structure</figcaption> 
   <img src="brain.jpg" alt="Brain Structure" width="320"/>
</figure>

The sizes syntax is used to define the size of the image across a number of breakpoints. srcset then defines an array of images and their inherent sizes.

The srcset attribute

srcset is a new attribute for use in <img> elements. Its value is a comma-separated list of images for the browser to choose from. A simple example is shown below :

<img srcset="brain-low-res.jpg 1x, brain-hi-res.jpg 2x" width="320"
alt="Brain Structure">

We tell the browser that there is an image to be rendered at 320 CSS pixels wide. If the device has a normal 1x screen, a low-resolution image of 320 x 240 pixels is loaded. If the device has a pixel ratio of 2 or more, a higher-resolution image of 640 x 480 pixels is requested from the server by the browser.

Here comes a second example :

<img src="brain.jpg" sizes="75vw" 
srcset="brain-small.jpg 320w, brain-medium.jpg 640w, 
brain-large.jpg 1024w,brain-xlarge.jpg 2000w" 
alt="Brain Structure">

The srcset attribute tells the browser which images are available with their respective pixel widths. It’s up to the browser to figure out which image to load, depending on the viewport width, the pixel ratio, the network speed or anything else the browser feels is relevant.
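
The idea behind that selection can be sketched roughly as follows (this is an illustration of the principle, not the exact algorithm any browser implements; the function name is mine) :

```javascript
// candidates   : the parsed srcset, e.g. [{url: 'brain-small.jpg', w: 320}, ...]
// displayWidth : the layout width of the image in CSS pixels (from sizes)
// dpr          : the device pixel ratio of the screen
function pickCandidate(candidates, displayWidth, dpr) {
  // effective pixel density of each candidate at this display width
  var scored = candidates
    .map(function (c) { return { url: c.url, density: c.w / displayWidth }; })
    .sort(function (a, b) { return a.density - b.density; });
  // smallest candidate that still covers the screen density, else the largest
  var fit = scored.filter(function (c) { return c.density >= dpr; });
  return (fit.length ? fit[0] : scored[scored.length - 1]).url;
}
```

With the four brain images above displayed at 320 CSS pixels, a 1x screen would get brain-small.jpg and a 2x screen brain-medium.jpg.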

The sizes attribute tells the browser that the image should be displayed at 75% of the viewport width. The sizes attribute is however more powerful than indicating default length values. The format is :

sizes="[media query] [length], [media query] [length] ... etc"

Media queries are paired with lengths. Lengths can be absolute (pixel, em) or relative (vw). The next example shows a use-case :

<img src="brain.jpg" 
sizes="(min-width:20em) 240px,(min-width:48em) 80vw, 65vw"
srcset="brain-small.jpg 320w, brain-medium.jpg 640w, 
brain-large.jpg 1024w,brain-xlarge.jpg 2000w" 
alt="Brain Structure">

We tell the browser that in viewports up to 20 em wide the image should be displayed 240 pixels wide, in viewports between 20 em and 48 em wide the image should take up 80% of the viewport, and in larger viewports the image should take up 65% of the viewport.

Can I use responsive images ?

The support of the <picture> element and the srcset and sizes attributes in the various browsers can be checked at the “Can I Use” website. This site was built and is managed by Alexis Deveria, it provides up-to-date support tables of front-end web technologies on desktop and mobile browsers.

Support of the picture element in browsers (Can I Use website)

Support of the srcset attribute in browsers (Can I Use website)

Currently, the <picture> element is only supported by Firefox version 33. The srcset attribute is only supported by Firefox versions > 32, Chrome versions > 34, Safari version 8 and Opera versions > 22.

PictureFill

The poor support of the <picture> element and the srcset attribute in current browsers does not mean that you have to wait before implementing responsive images in your website. Scott Jehl from Filament Group developed a great polyfill called PictureFill that supports the <picture> element and the srcset and sizes attributes.

Initialization code :

<script>
// Picture element HTML5 shiv
document.createElement( "picture" );
</script>
<script src="picturefill.js" async></script>

<picture> code :

<picture>
<!--[if IE 9]><video style="display: none;"><![endif]-->
<source srcset="brain-xx.jpg" media="(min-width: 1000px)">
<source srcset="brain-x.jpg" media="(min-width: 800px)">
<source srcset="brain.jpg">
<!--[if IE 9]></video><![endif]-->
<img srcset="brain.jpg" alt="Brain Structure">
</picture>

If JavaScript is disabled, PictureFill only offers the alt text as a fallback. PictureFill supports SVG and WebP types on any source element, and will disregard a source if its type is not supported. To support IE9, a video element is wrapped around the source elements using conditional comments.

srcset code :

<img sizes="(min-width: 40em) 80vw, 100vw"
srcset="brain-s.jpg 375w,brain.jpg 480w,brain-x.jpg 768w" 
alt="Brain Structure">

The PictureFill syntax is not quite the same as the specification. The fallback src attribute was intentionally removed to prevent images from being downloaded twice.

CSS4 image-set

By using the CSS4 image-set function, we can declare multiple images which will be used for normal and high-resolution displays. The image-set function is declared within the background-image property; each background URL inside the function is followed by a resolution parameter (1x for normal displays, 2x for high-resolution displays), like so :

.selector { 
 background-image: image-set(url('image-1x.jpg') 1x, 
 url('image-2x.jpg') 2x); 
} 

The CSS4 image-set function also tries to deliver the most appropriate image resolution based on the connection speed. So, regardless of the screen resolution, if the user accesses the image through a slow Internet connection, the smaller image will be delivered.

CSS4 image-set is still experimental. It is only supported in Safari 6 and Google Chrome 21 where it is prefixed with -webkit.

HTTP2 client hints

The responsive image standards leave the burden of creating images at appropriate sizes, resolutions and formats to the web developer. Client hints are a way to offload this work to the server. Client hints are HTTP headers that give the server some information about the device and the requested resource. Ilya Grigorik, web performance engineer and developer advocate at Google, submitted in December 2013 an Internet Draft “HTTP client hints” to the Internet Network Working Group of the Internet Engineering Task Force (IETF). The draft specifies two new headers for the HTTP 2.0 version : CH-DPR for device pixel ratio and CH-RW for resource width. A server-side script then generates the best image for the requesting device and delivers it.

New image formats

There are some new image formats like JPEG 2000, JPEG XR and WebP that generate higher quality images with smaller file sizes, but they aren’t widely supported. JPEG 2000 is scalable in nature, meaning that it can be decoded in a number of ways. By truncating the codestream at any point, one may obtain a representation of the image at a lower resolution. But the web already has this type of responsive image format, which is progressive JPEG, if we get the browsers to download only the necessary bytes of the picture (i.e. with the byte range HTTP header). The main problem is that the new image formats will take a long time to implement and deploy, and have no fallback for older browsers.
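
Downloading only the first bytes of a progressive JPEG can be sketched with a ranged HTTP request (the URL and the byte count are placeholders; the server must honour Range requests and answer 206 Partial Content) :

```shell
# fetch only the first 20000 bytes of the image
curl -H "Range: bytes=0-19999" -o partial.jpg http://example.com/brain.jpg
```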

Links

The following list provides links to websites with additional information about <picture>, srcset, PictureFill and related topics :

Responsive iFrames and Image Maps

Last update : June 27, 2014

Some HTML elements don’t work with responsive layouts. Among these are iFrames, which you may need to use when embedding content from external sources. Other elements are Image Maps which are lists of coordinates relating to a specific image, created in order to hyperlink areas of the image to different destinations.

Responsive iFrames

When you embed content from an external source with an iFrame, you must include width and height attributes. Without these parameters, the iframe will disappear because it would have no dimensions. Unfortunately, you can’t fix this in your CSS style sheet.

To make embedded content responsive, you need to add a containing wrapper around the iframe :
<div class="iframe_container">
<iframe src="http://www.yoursite.com/yourpage.html" width="640" height="480">
</iframe>
</div>

The containing wrapper is styled with the .iframe_container class in the style sheet :
.iframe_container {
position: relative;
padding-bottom: 75%;
height: 0;
overflow: hidden;
}

Setting the position to relative lets us use absolute positioning for the iframe itself. The padding-bottom value is calculated out of the aspect ratio of the iFrame, which in this case is 480 / 640 = 75%. The height is set to 0 because padding-bottom gives the element the height it needs. The width will automatically resize with the responsive element included in the wrapping div. Setting overflow to hidden ensures that any content flowing outside of this element will be hidden from view.

The iFrame itself is styled with the following CSS code :
.iframe_container iframe {
position: absolute;
top:0;
left: 0;
width: 100%;
height: 100%;
}

Absolute positioning must be used because the containing element has a height of 0. The top and left properties position the iFrame correctly in the containing element. The width and height properties ensure that the iFrame takes up 100% of the space used by the containing element set with padding.

Responsive Image Maps

Image maps are coordinate representations of images divided into sections, mostly in rect, poly and circle format. According to the specs, percent values can be used for coordinates, but no major browser understands them correctly; all interpret the coordinates as pixel coordinates. The result is that image maps applied to responsive images don’t work as expected when the images are resized. It’s necessary to recalculate the area coordinates to match the actual image size.
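
One common client-side approach is sketched below: keep the original coordinates in a data attribute and rescale them whenever the image is resized (the element id and attribute names are my own, not from a particular library) :

```javascript
// Pure helper: scale a comma-separated coords string by a ratio.
function scaleCoords(coords, ratio) {
  return coords
    .split(',')
    .map(function (c) { return Math.round(parseFloat(c) * ratio); })
    .join(',');
}

// Browser wiring: assumes <img id="map-img" usemap="#m"> and that each
// <area> keeps its original pixel coordinates in a data-coords attribute.
function resizeMap() {
  var img = document.getElementById('map-img');
  var ratio = img.clientWidth / img.naturalWidth;
  var areas = document.querySelectorAll('map[name="m"] area');
  Array.prototype.forEach.call(areas, function (area) {
    area.coords = scaleCoords(area.getAttribute('data-coords'), ratio);
  });
}
// window.addEventListener('load', resizeMap);
// window.addEventListener('resize', resizeMap);
```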

There are different solutions available to make Image Maps responsive :

The following demo shows a responsive Image Map embedded in a responsive iFrame :

Links

The list below shows links to websites sharing additional information about responsive iFrames and image maps :

 

Wearable Technology

The term Wearable technology refers to clothing and accessories incorporating computer and advanced electronic technologies. The designs often incorporate practical functions and features, but may also have a purely critical or aesthetic agenda.

Other terms used are wearable devices, wearable computers or fashion electronics. A healthy debate is emerging over whether wearables are best applied to the wrist, to the face or in some other form.

Smart watches

A smart watch is a computerized wristwatch with functionality that is enhanced beyond timekeeping. The first digital watch was launched as early as 1972, but the production of real smart watches started only recently. The most notable smart watches which are currently available or announced are listed below :

 

Android Wear


On March 18, 2014, Google officially announced Android’s entrance into wearables with the project Android Wear. Watches powered by Android Wear bring you :

  • Useful information when you need it most
  • Straight answers to spoken questions
  • The ability to better monitor your health and fitness
  • Your key to a multiscreen world

An Android Wear Developer Preview is already available. It lets you create wearable experiences for your existing Android apps and see how they will appear on square and round Android wearables. In late 2014, the Android Wear SDK will be launched, enabling even more customized experiences.

Google Glass

 

Google Glass


Google Glass is a wearable computer with an optical head-mounted display (OHMD). Wearers communicate with the Internet via natural language voice commands. In the summer of 2011, Google engineered a prototype of its glasses. Google Glass became officially available to the general public on May 15, 2014, for a price of $1,500 (an open beta reserved to US residents). Google also provides four prescription frames for about $225. Apps for Google Glass are called Glassware.

Tools, patterns and documentation to develop Glassware are available at Google’s Glass developer website. An Augmented Reality SDK for Google Glass is available from Wikitude.

Smart Shirts

Smart shirts, also known as electronic textiles (E-textiles), are clothes made from smart fabric and used for remote physiological monitoring of various vital signs of the wearer, such as heart rate and temperature. E-textiles are distinct from wearable computing because the emphasis is placed on the seamless integration of textiles with electronic elements like microcontrollers, sensors and actuators. Furthermore, E-textiles need not be wearable : they are also found in interior design, in eHealth and in baby breathing monitors.

At the Recode Event 2014, Intel announced its own smart shirt, which uses embedded smart fibers that can report the wearer’s heart rate and other health data.

VanGoYourself

VanGoYourself : The Last Supper, Leonardo da Vinci


The VanGoYourself platform allows anyone, anywhere in the world, to recreate works of art from the Greater Region. About 50 paintings from more than ten collections in seven European countries can be reproduced on VanGoYourself.

The best recreations have been published on the website www.vangoyourself.com and all submissions can be viewed on vangoyourself.tumblr.com.

VanGoYourself is a Europeana innovation and the result of a European collaboration within the « Europeana Creative » project. The VanGoYourself concept was born from the initiative of two non-profit organisations : Culture24 in England and Plurio.net in Luxembourg. Both are committed to extending the visibility of arts and culture.

Stop killing my iPhone battery

Last Update : January 19, 2017

One of the biggest complaints about Apple’s mobile operating system iOS 7 is how easily it drains your iPhone battery. Here are a few quick fixes to keep iOS 7 devices powered for much longer :

  • disable the Background App refresh (actualisation en arrière plan)
  • turn off Location Services completely or disable certain apps one by one
  • reduce the motion of the user interface in accessibility (set parameter to “on”)
  • disable the automatic updates option
  • turn off AirDrop
  • turn off all notifications for unnecessary apps
  • turn off unnecessary system services
  • disable Auto-Brightness and decrease the setting manually
  • disable what you don’t need in Apple’s internal search functionality called Spotlight
  • close open apps : you can close multiple apps at once by double-clicking the home button to reveal the open apps, then swiping up to three apps at the same time, using three fingers and dragging them upwards.

The following list provides links to additional information about the iPhone battery power-saving options :

Sony SmartWatch 2

Last update : May 28, 2014

Sony SmartWatch 2


The Sony SmartWatch 2, also known as SW2, is a wearable device launched in late September 2013. The SW2 connects to an Android 4.0 (and higher) smartphone using Bluetooth, and supports NFC for easy pairing. The display is a transflective LCD screen with a 220×176 resolution.

Sony SmartWatch 2 Usage

Sony SmartWatch 2 Homescreen


To set up your SmartWatch 2 device, you first need to install the Smart Connect (formerly Liveware Manager) app (last update May 8, 2014) on your phone and to pair the watch with your phone over a Bluetooth connection. The next step is to install the official Sony SmartWatch app (last update May 6, 2014). This app is not visible on the phone’s home screen, but integrated in the Smart Connect app and in the phone’s status bar. The app allows you to edit settings, select a watch interface and find/enable/disable SmartWatch app extensions.

Sony SmartWatch 2 Extensions

Some useful free extensions for the watch are listed below :

Some useful paid extensions are listed hereafter :

Sony SmartWatch 2 Development

The Sony Developer World website provides SDKs, tutorials, tips, tools and documentation on how to create SmartWatch 2 app extensions. A comprehensive Knowledge Base is available to provide more information about these topics.

Sony SmartWatch 2 on Blackberry Z10

To run the SmartWatch 2 application in the Blackberry Android 4.3 Runtime Player, you need to make a small modification in the MANIFEST.xml file of the SmartWatch 2 app and its extensions and to resign the Smart Connect app with the same key. See my separate post about this subject for additional information.

Face Recognition Tests

Referring to my recent post about Face Recognition Systems, I did some trials with my “About” photo. Here are the results of my Face Recognition Tests :

Animetrics

Face Detection Tests : Animetrics


{"images": [
{"time": 4.328,
"status": "Complete",
"url": "http://www.web3.lu/download/Marco_Barnig_529x529.jpg",
"width": 529,
"height": 529,
"setpose_image": "http://api.animetrics.com/img/setpose/d89864cc3aaab341d4211113a8310f9a.jpg",
"faces": [
{"topLeftX": 206,
"topLeftY": 112,
"width": 82,
"height": 82,
"leftEyeCenterX": 227.525,
"leftEyeCenterY": 126.692,
"rightEyeCenterX": 272.967,
"rightEyeCenterY": 128.742,
"noseTipX": 252.159,
"noseTipY": 158.973,
"noseBtwEyesX": 251.711,
"noseBtwEyesY": 126.492,
"chinTipX": -1,
"chinTipY": -1,
"leftEyeCornerLeftX": 219.005,
"leftEyeCornerLeftY": 126.308,
"leftEyeCornerRightX": 237.433,
"leftEyeCornerRightY": 127.85,
"rightEyeCornerLeftX": 262.995,
"rightEyeCornerLeftY": 129.004,
"rightEyeCornerRightX": 280.777,
"rightEyeCornerRightY": 129.094,
"rightEarTragusX": -1,
"rightEarTragusY": -1,
"leftEarTragusX": -1,
"leftEarTragusY": -1,
"leftEyeBrowLeftX": 211.478,
"leftEyeBrowLeftY": 120.93,
"leftEyeBrowMiddleX": 226.005,
"leftEyeBrowMiddleY": 117.767,
"leftEyeBrowRightX": 241.796,
"leftEyeBrowRightY": 120.416,
"rightEyeBrowLeftX": 264.142,
"rightEyeBrowLeftY": 121.101,
"rightEyeBrowMiddleX": 278.625,
"rightEyeBrowMiddleY": 119.38,
"rightEyeBrowRightX": 290.026,
"rightEyeBrowRightY": 124.059,
"nostrilLeftHoleBottomX": 243.92,
"nostrilLeftHoleBottomY": 168.822,
"nostrilRightHoleBottomX": 257.572,
"nostrilRightHoleBottomY": 170.683,
"nostrilLeftSideX": 236.867,
"nostrilLeftSideY": 163.555,
"nostrilRightSideX": 262.073,
"nostrilRightSideY": 165.049,
"lipCornerLeftX": -1,
"lipCornerLeftY": -1,
"lipLineMiddleX": -1,
"lipLineMiddleY": -1,
"lipCornerRightX": -1,
"lipCornerRightY": -1,
"pitch": -6.52624,
"yaw": -6.43,
"roll": 2.35988
}]}]}
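The Animetrics response above is plain JSON, so derived measures are easy to compute. A sketch using Python’s standard library on a trimmed excerpt of the response (only the bounding box and eye centers are kept), computing the distance between the eye centers :

```python
import json
import math

# trimmed excerpt of the Animetrics JSON response shown above
response = json.loads("""{"images": [{"faces": [{
  "topLeftX": 206, "topLeftY": 112, "width": 82, "height": 82,
  "leftEyeCenterX": 227.525, "leftEyeCenterY": 126.692,
  "rightEyeCenterX": 272.967, "rightEyeCenterY": 128.742}]}]}""")

face = response["images"][0]["faces"][0]
# Euclidean distance between the detected eye centers, in pixels
eye_distance = math.hypot(face["rightEyeCenterX"] - face["leftEyeCenterX"],
                          face["rightEyeCenterY"] - face["leftEyeCenterY"])
print(round(eye_distance, 1))  # → 45.5
```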

APICloudMe

Face Recognition Tests : APICloudMe FaceRect and FaceMark


{"faces" : [
{"orientation" : "frontal",
"landmarks" : [
{"x" : 193,"y" : 125},
{"x" : 191,"y" : 145},
{"x" : 192,"y" : 163},
{"x" : 196,"y" : 178},
{"x" : 206,"y" : 194},
{"x" : 218,"y" : 204},
{"x" : 229,"y" : 206},
{"x" : 243,"y" : 209},
{"x" : 259,"y" : 206},
{"x" : 268,"y" : 202},
{"x" : 278,"y" : 195},
{"x" : 287,"y" : 182},
{"x" : 292,"y" : 167},
{"x" : 296,"y" : 150},
{"x" : 297,"y" : 129},
{"x" : 284,"y" : 112},
{"x" : 279,"y" : 108},
{"x" : 268,"y" : 110},
{"x" : 263,"y" : 116},
{"x" : 270,"y" : 113},
{"x" : 277,"y" : 111},
{"x" : 214,"y" : 111},
{"x" : 223,"y" : 107},
{"x" : 234,"y" : 110},
{"x" : 238,"y" : 115},
{"x" : 232,"y" : 113},
{"x" : 223,"y" : 110},
{"x" : 217,"y" : 127},
{"x" : 228,"y" : 121},
{"x" : 236,"y" : 129},
{"x" : 227,"y" : 131},
{"x" : 227,"y" : 126},
{"x" : 280,"y" : 129},
{"x" : 271,"y" : 123},
{"x" : 262,"y" : 130},
{"x" : 271,"y" : 133},
{"x" : 271,"y" : 127},
{"x" : 242,"y" : 128},
{"x" : 238,"y" : 145},
{"x" : 232,"y" : 157},
{"x" : 232,"y" : 163},
{"x" : 247,"y" : 168},
{"x" : 262,"y" : 164},
{"x" : 262,"y" : 158},
{"x" : 258,"y" : 146},
{"x" : 256,"y" : 129},
{"x" : 239,"y" : 163},
{"x" : 256,"y" : 164},
{"x" : 221,"y" : 179},
{"x" : 232,"y" : 178},
{"x" : 240,"y" : 179},
{"x" : 245,"y" : 180},
{"x" : 251,"y" : 180},
{"x" : 259,"y" : 180},
{"x" : 269,"y" : 182},
{"x" : 261,"y" : 186},
{"x" : 253,"y" : 189},
{"x" : 245,"y" : 189},
{"x" : 236,"y" : 187},
{"x" : 229,"y" : 184},
{"x" : 235,"y" : 182},
{"x" : 245,"y" : 184},
{"x" : 255,"y" : 184},
{"x" : 254,"y" : 183},
{"x" : 245,"y" : 183},
{"x" : 235,"y" : 182},
{"x" : 245,"y" : 183},
{"x" : 249,"y" : 160}
]}],
"image" : {
"width" : 529,
"height" : 529
}}
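FaceMark returns the landmarks as an anonymous list of points; a face bounding box can be derived from such a list by taking the minimum and maximum coordinates. A sketch on a trimmed excerpt of the response above (the full response contains 68 landmarks) :

```python
import json

# trimmed FaceRect/FaceMark-style response (landmark list shortened)
response = json.loads("""{"faces": [{"orientation": "frontal",
  "landmarks": [{"x": 193, "y": 125}, {"x": 297, "y": 129},
                {"x": 243, "y": 209}, {"x": 229, "y": 107}]}]}""")

points = response["faces"][0]["landmarks"]
xs = [p["x"] for p in points]
ys = [p["y"] for p in points]
# bounding box as (left, top, right, bottom)
bbox = (min(xs), min(ys), max(xs), max(ys))
print(bbox)  # → (193, 107, 297, 209)
```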

Betaface API

Image ID : 65fe585d-e565-496e-ab43-bcfdc18c7918
Faces : 2

Face Recognition Tests : Betaface API


hair color type: red (24%), gender: male (52%), age: 49 (14%), ethnicity: white (57%), smile: yes (15%), glasses: yes (48%), mustache: yes (46%), beard: yes (35%)
500: HEAD Height/Width level parameter (POS = long narrow face NEG = short wide face)(min -2 max 2): 1
501: HEAD TopWidth/BottomWidth level parameter (POS = heart shape NEG = rectangular face)(min -2 max 2):0
502: NOSE Height/Width level parameter (NEG = thinner) (min -2 max 2) : 2
503: NOSE TopWidth/BottomWidth level parameter (NEG = wider at the bottom) (min -2 max 2) : 1
504: MOUTH Width level parameter (min -2 max 2) : 1
505: MOUTH Height level parameter (NEG = thin) (min -2 max 2) : 1
521: MOUTH Corners vertical offset level parameter (NEG = higher) (min -2 max 2) : -2
506: EYES Height/Width level parameter (NEG = thinner and wider, POS = more round) (min -2 max 2) : -1
507: EYES Angle level parameter (NEG = inner eye corners moved towards mouth) (min -2 max 2) : 1
517: EYES closeness level parameter (NEG = closer) (min -2 max 2) : 0
518: EYES vertical position level parameter (NEG = higher) (min -2 max 2) : 0
508: HAIRSTYLE Sides thickness level parameter (min 0 max 3) : 0
509: HAIRSTYLE Hair length level parameter (min 0 max 5) : 0
510: HAIRSTYLE Forehead hair presence parameter (min 0 max 1) : 1
511: HAIRSTYLE Hair Top hair amount level parameter (min 0 max 4) : 3
512: FACE HAIR Mustache level parameter (min 0 max 2) : 0
513: FACE HAIR Beard level parameter (min 0 max 2) : 0
514: GLASSES presence level parameter (min 0 max 1) : 0
515: EYEBROWS thickness level parameter (min -2 max 2) : -2
516: EYEBROWS vertical pos level parameter (POS = closer to the eyes) (min -2 max 2) : -2
520: EYEBROWS Angle level parameter(NEG = inner eyebrows corners moved towards mouth)(min -2 max 2) :-2
519: TEETH presence level parameter (min 0 max 1) : 1
522: NOSE-CHIN distance level parameter (min -2 max 2) : 0
620756992: face height/face width ratio / avg height/width ratio : 1.0478431040781575
620822528: face chin width/face width ratio / avg height/width ratio : 1.0038425243863847
620888064: face current eyes distance/ avg eyes distance ratio : 1.0104771666577224
620953600: eyes vertical position - avg position, minus - higher : -0.00089347261759175321
621019136: distance between chin bottom and low lip / avg distance : 0.97106500562603393
621084672: distance between nose bottom and top lip / avg distance : 1.0075242288018134
621150208: distance between nose top and bottom / avg distance : 1.0619860919447868
621215744: distance between nose left and right / avg distance : 1.0426301239394231
621281280: distance between left mouth corner and right mouth corner / avg distance : 1.0806991515139102
621346816: eyebrows thichkness / avg thichkness : 0.83331489266473235
621412352: ratio (low nose part width / top nose part width) / avg ratio : 0.9717897529241869
621477888: eye height/width ratio / avg height/width ratio : 0.9611420163590253
621543424: width of the chin / avg width of the chin : 0.96738062415147075
621608960: angle of the eyes in degrees - avg angle. Negative angle mean inner eye corners moved towards mouth from average position : -0.35247882153940435
621674496: distance between eyebrows and eyes / avg distance : 0.88418599076781756
621740032: face width / avg width ratio : 0.96367766920692888
621805568: skin color (Weight) (min 0 max 1) : 1.340999960899353
621871104: skin color (H) (min 0 max 180) : 7
621936640: skin color (S) (min 0 max 255) : 81
622002176: skin color (V) (min 0 max 255) : 208
622067712: skin color (R) (min 0 max 255) : 208
622133248: skin color (G) (min 0 max 255) : 157
622198784: skin color (B) (min 0 max 255) : 142
622264320: mustache color if detected (Weight) (min 0 max 1) : 0
622329856: mustache color if detected (H) (min 0 max 180) : 0
622395392: mustache color if detected (S) (min 0 max 255) : 0
622460928: mustache color if detected (V) (min 0 max 255) : 0
622526464: mustache color if detected (R) (min 0 max 255) : 0
622592000: mustache color if detected (G) (min 0 max 255) : 0
622657536: mustache color if detected (B) (min 0 max 255) : 0
622723072: beard color if detected (Weight) (min 0 max 1) : 0
622788608: beard color if detected (H) (min 0 max 180) : 0
622854144: beard color if detected (S) (min 0 max 255) : 0
622919680: beard color if detected (V) (min 0 max 255) : 0
622985216: beard color if detected (R) (min 0 max 255) : 0
623050752: beard color if detected (G) (min 0 max 255) : 0
623116288: beard color if detected (B) (min 0 max 255) : 0
623181824: weight of teeth color (Weight) (min 0 max 1) : 0.4440000057220459
623247360: glasses detection (weight floating value, related to thickness of rim/confidence) (min 0.03 max 1) : 0.065934065934065936
623312896: color of the hair area (Weight) (min 0 max 1) : 0.23899999260902405
623378432: color of the hair area (H) (min 0 max 180) : 4
623443968: color of the hair area (S) (min 0 max 255) : 151
623509504: color of the hair area (V) (min 0 max 255) : 130
623575040: color of the hair area (R) (min 0 max 255) : 130
623640576: color of the hair area (G) (min 0 max 255) : 63
623706112: color of the hair area (B) (min 0 max 255) : 53
673513472: eyebrows angle. Negative angle mean inner eyebrow corners moved towards mouth from average position : 0.086002873281683989
673579008: mouth corners Y offset - avg offset : -0.12499242147802289
673644544: mouth height / avg height : 1.1755344432588537
673710080: nose tip to chin distance / avg distance : 1.0093704038280917
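The Betaface summary line has the flattened form "name: value (confidence%)", which can be turned back into structured data with a regular expression. A sketch (the regex is my own, not part of the Betaface API) :

```python
import re

line = ("hair color type: red (24%), gender: male (52%), age: 49 (14%), "
        "ethnicity: white (57%), smile: yes (15%), glasses: yes (48%)")

# parse "name: value (confidence%)" pairs into {name: (value, confidence)}
attrs = {m.group(1).strip(): (m.group(2).strip(), int(m.group(3)))
         for m in re.finditer(r"([^:,]+):\s*([^(,]+)\((\d+)%\)", line)}

print(attrs["gender"])  # → ('male', 52)
```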

BioID

Face Recognition Tests : BioID


<?xml version="1.0" encoding="utf-16"?>
<OperationResults xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.bioid.com/2012/02/BWSMessages">
  <JobID>955410d9-eb2c-43db-b3ca-deeedcd665af</JobID>
  <Command>QualityCheck</Command>
  <Succeeded>true</Succeeded>
  <Samples>
    <Sample Trait="Face" Suitable="true">
      <Errors>
        <Error>
          <Code>ImageTooSmall</Code>
          <Message>The part of the image containing the found face is too small.</Message>
          <Details>The found face (with an eye-distance of 43 pixels) does not have the required eye-distance of at least 240 pixels.</Details>
        </Error>
        <Error>
          <Code>ImageTooSmall</Code>
          <Message>The part of the image containing the found face is too small.</Message>
          <Details>The cropped face image (with 146 x 188 pixels) does not have the minimum expected resolution of 827 x 1063 pixels.</Details>
        </Error>
        <Error>
          <Code>FaceAsymmetry</Code>
          <Message>It seems that the face of the found person is somehow asymmetric, maybe due to bad illumination and/or due to a wrong pose.</Message>
          <Details>An asymmetry of 88.20 was calculated, where only a value up to 50.00 is allowed.</Details>
        </Error>
        <Error>
          <Code>MissingTimeStamp</Code>
          <Message>The image does not have any tag attached which could be used to find out when it was taken. It cannot be assured that the image is not older than 183 days.</Message>
          <Details />
        </Error>
        <Error>
          <Code>ImageOverExposure</Code>
          <Message>The image is over-exposed, i.e. it has too many very light pixels.</Message>
          <Details>The amount of very bright pixels is 1.34%, where only 1.00% are allowed.</Details>
        </Error>
      </Errors>
      <Tags>
        <RightEye X="52.174" Y="84.108" />
        <LeftEye X="95.448" Y="86.173" />
      </Tags>
    </Sample>
  </Samples>
  <Statistics>
    <ProcessingTime>00:00:01.3642947</ProcessingTime>
    <TotalServiceTime>00:00:01.6941376</TotalServiceTime>
  </Statistics>
</OperationResults>
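Unlike the other services, BioID answers with namespaced XML, so the error codes have to be extracted with a namespace-aware parser. A sketch with Python’s standard library, run on a trimmed excerpt of the result above :

```python
import xml.etree.ElementTree as ET

# trimmed excerpt of the BioID QualityCheck result shown above
xml = """<OperationResults xmlns="http://schemas.bioid.com/2012/02/BWSMessages">
  <Command>QualityCheck</Command>
  <Succeeded>true</Succeeded>
  <Samples><Sample Trait="Face" Suitable="true"><Errors>
    <Error><Code>ImageTooSmall</Code></Error>
    <Error><Code>FaceAsymmetry</Code></Error>
  </Errors></Sample></Samples>
</OperationResults>"""

# map a prefix to the BWSMessages namespace for the XPath queries
ns = {"bws": "http://schemas.bioid.com/2012/02/BWSMessages"}
root = ET.fromstring(xml)
codes = [e.text for e in root.findall(".//bws:Error/bws:Code", ns)]
print(codes)  # → ['ImageTooSmall', 'FaceAsymmetry']
```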

BiometryCloud

No demo app available.

HP Labs Multimedia Analytical Platform

Face Recognition Tests : HP Labs Multimedia Analytical Platform


{
"pic":{
"id_pic":"8f2cb88987e9e6b88813c5c17599204a25a8d63b",
"height":"529",
"width":"529"
},
"face":[{
"id_face":"2500002",
"id_pic":"8f2cb88987e9e6b88813c5c17599204a25a8d63b",
"bb_left":"209",
"bb_top":"109",
"bb_right":"290",
"bb_bottom":"190"
}]}

Lambda Labs Face

Face Recognition Tests : Lambda Labs


{
"status": "success",
"images": ["http://www.web3.lu/download/Marco_Barnig_529x529.jpg"],
"photos": [{"url": "http://www.web3.lu/download/Marco_Barnig_529x529.jpg",
"width": 529,
"tags": [
{"eye_left": {"y": 128,"x": 269},
"confidence": 0.978945010372561,
"center": {"y": 143,"x": 250},
"mouth_right": {"y": 180,"x": 267},
"mouth_left": {"y": 180,"x": 220},
"height": 128,"width": 128,
"mouth_center": {"y": 180,"x": 243.5},
"nose": {"y": 166,"x": 250},
"eye_right": {"y": 129,"x": 231},
"tid": "31337",
"attributes": [{"smile_rating": 0.5,"smiling": false,"confidence": 0.5},
{"gender": "male","confidence": 0.6564017215167235}],
"uids": [
{"confidence": 0.71,"prediction": "TigerWoods","uid": "TigerWoods@CELEBS"},
{"confidence": 0.258,"prediction": "ArnoldS","uid": "ArnoldS@CELEBS"}]}],
"height": 529
}]}

Orbeus ReKognition

Face Recognition Tests : Orbeus ReKognition


{
"url" : "base64_ZNlBfC.jpg",
"face_detection" : [{
"boundingbox" : {
"tl" : {"x" : "188.46","y" : "80.77"},
"size" : {"width" : "126.15","height" : "126.15"}},
"confidence" : "0.98",
"name" : "mark_zuckerberg:0.37,obama:0.33,brad_pitt:0.16,",
"matches" : [{"tag" : "mark_zuckerberg","score" : "0.37"},
{"tag" : "obama","score" : "0.33"},
{"tag" : "brad_pitt","score" : "0.16"}],
"eye_left" : {"x" : "229.6","y" : "125.6"},
"eye_right" : {"x" : "272.5","y" : "127.7"},
"nose" : {"x" : "252.4","y" : "161.5"},
"mouth l" : {"x" : "226.9","y" : "173"},
"mouth_l" : {"x" : "226.9","y" : "173"},
"mouth r" : {"x" : "266.1","y" : "175.6"},
"mouth_r" : {"x" : "266.1","y" : "175.6"},
"pose" : {"roll" : "1.68","yaw" : "19.83","pitch" : "-11.7"},
"b_ll" : {"x" : "211.6","y" : "118"},
"b_lm" : {"x" : "226.7","y" : "113.2"},
"b_lr" : {"x" : "242.2","y" : "116.1"},
"b_rl" : {"x" : "263.5","y" : "117.1",
"b_rm" : {"x" : "277.8","y" : "115.5"},
"b_rr" : {"x" : "290.5","y" : "120"},
"e_ll" : {"x" : "221.5","y" : "125.7"},
"e_lr" : {"x" : "237.4","y" : "126.7"},
"e_lu" : {"x" : "229.9","y" : "122.5"},
"e_ld" : {"x" : "229.4","y" : "128.1"},
"e_rl" : {"x" : "265.3","y" : "128.2"},
"e_rr" : {"x" : "279.5","y" : "128.4"},
"e_ru" : {"x" : "272.5","y" : "124.8"},
"e_rd" : {"x" : "272.5","y" : "130.1"},
"n_l" : {"x" : "240.3","y" : "161.2"},
"n_r" : {"x" : "259.7","y" : "163.8"},
"m_u" : {"x" : "248.2","y" : "174.2"},
"m_d" : {"x" : "246.3","y" : "189.3"},
"race" : {"white" : "0.56"},
"age" : "48.28",
"glasses" : "1",
"eye_closed" : "0",
"mouth_open_wide" : "0.77",
"sex" : "0.96"},
{"boundingbox" : {
"tl" : {"x" : "19.23","y" : "221.54"},
"size" : {"width" : "160","height" : "160"}},
"confidence" : "0.07",
"name" : "obama:0.03,brad_pitt:0.02,jim_parsons:0.01,",
"matches" : [
{"tag" : "obama","score" : "0.03"},
{"tag" : "brad_pitt","score" : "0.02"},
{"tag" : "jim_parsons","score" : "0.01"}
],
"eye_left" : {"x" : "93.7","y" : "257.9"},
"eye_right" : {"x" : "128.4","y" : "309.1"},
"nose" : {"x" : "95.8","y" : "299.5"},
"mouth l" : {"x" : "58.9","y" : "306.9"},
"mouth_l" : {"x" : "58.9","y" : "306.9"},
"mouth r" : {"x" : "94.1","y" : "350.9"},
"mouth_r" : {"x" : "94.1","y" : "350.9"},
"pose" : {"roll" : "59.26","yaw" : "-11.22","pitch" : "9.96"},
"b_ll" : {"x" : "102.4","y" : "227.3"},
"b_lm" : {"x" : "114.9","y" : "240.8"},
"b_lr" : {"x" : "119.5","y" : "259.5"},
"b_rl" : {"x" : "133.9","y" : "282.1"},
"b_rm" : {"x" : "147.7","y" : "295.7"},
"b_rr" : {"x" : "153.8","y" : "312.3"},
"e_ll" : {"x" : "88.2","y" : "248.3"},
"e_lr" : {"x" : "100.2","y" : "267.6"},
"e_lu" : {"x" : "94.2","y" : "257.4"},
"e_ld" : {"x" : "92.7","y" : "258.4"},
"e_rl" : {"x" : "122.3","y" : "299.1"},
"e_rr" : {"x" : "134.7","y" : "319.7"},
"e_ru" : {"x" : "129.5","y" : "308.5"},
"e_rd" : {"x" : "127.2","y" : "309.6"},
"n_l" : {"x" : "78.7","y" : "298.6"},
"n_r" : {"x" : "97.8","y" : "318.4"},
"m_u" : {"x" : "80.4","y" : "321.2"},
"m_d" : {"x" : "72.6","y" : "328.5"},
"race" : {"black" : "0.63"},
"age" : "23.07",
"glasses" : "0.98",
"eye_closed" : "0.9",
"mouth_open_wide" : "0.46",
"sex" : "0.66"
}],
"ori_img_size" : {
"width" : "529",
"height" : "529"
},
"usage" : {
"quota" : "-10261829",
"status" : "Succeed.",
"api_id" : "4321"
}
}

Sky Biometry

Face Recognition Tests : SkyBiometry


face: (85%)
gender: male (86%)
smiling: true (100%)
glasses: true (57%)
dark glasses: false (21%)
eyes: open (80%)
mood: angry (69%)
N: 0%
A: 69%
D: 40%
F: 0%
H: 23%
S: 0%
SP: 21%
roll: 3
yaw: -14