{"id":93,"date":"2022-12-08T11:02:20","date_gmt":"2022-12-08T11:02:20","guid":{"rendered":"https:\/\/blog.vaniila-ai.catie-na.fr\/?p=93"},"modified":"2022-12-08T11:04:52","modified_gmt":"2022-12-08T11:04:52","slug":"reconnaissance-faciale-a-laide-de-reseaux-de-neurones-siamois","status":"publish","type":"post","link":"https:\/\/blog.vaniila-ai.catie-na.fr\/?p=93","title":{"rendered":"Reconnaissance faciale \u00e0 l\u2019aide de r\u00e9seaux de neurones siamois"},"content":{"rendered":"<p>[et_pb_section fb_built=\u00a0\u00bb1&Prime; custom_padding_last_edited=\u00a0\u00bbon|tablet\u00a0\u00bb admin_label=\u00a0\u00bbHeader\u00a0\u00bb _builder_version=\u00a0\u00bb4.19.2&Prime; _module_preset=\u00a0\u00bbdefault\u00a0\u00bb use_background_color_gradient=\u00a0\u00bbon\u00a0\u00bb background_color_gradient_stops=\u00a0\u00bbrgba(0,0,0,0) 0%|#000000 86%\u00a0\u00bb background_color_gradient_overlays_image=\u00a0\u00bbon\u00a0\u00bb background_image=\u00a0\u00bbhttp:\/\/blog.vaniila-ai.catie-na.fr\/wp-content\/uploads\/2022\/12\/web-developer-28.jpg\u00a0\u00bb custom_padding=\u00a0\u00bb5%||||false|false\u00a0\u00bb custom_padding_tablet=\u00a0\u00bb60px||||false|false\u00a0\u00bb custom_padding_phone=\u00a0\u00bb60px||||false|false\u00a0\u00bb hover_enabled=\u00a0\u00bb0&Prime; collapsed=\u00a0\u00bbon\u00a0\u00bb global_colors_info=\u00a0\u00bb{}\u00a0\u00bb width=\u00a0\u00bb100%\u00a0\u00bb custom_margin=\u00a0\u00bb-14px||||false|false\u00a0\u00bb sticky_enabled=\u00a0\u00bb0&Prime;][et_pb_row _builder_version=\u00a0\u00bb4.18.0&Prime; _module_preset=\u00a0\u00bbdefault\u00a0\u00bb global_colors_info=\u00a0\u00bb{}\u00a0\u00bb][et_pb_column type=\u00a0\u00bb4_4&Prime; _builder_version=\u00a0\u00bb4.18.0&Prime; _module_preset=\u00a0\u00bbdefault\u00a0\u00bb global_colors_info=\u00a0\u00bb{}\u00a0\u00bb][et_pb_text _builder_version=\u00a0\u00bb4.19.2&Prime; _module_preset=\u00a0\u00bbe6504a1b-67eb-4b3d-b023-bcab277610b6&Prime; text_font=\u00a0\u00bb|||on|||||\u00a0\u00bb header_4_font=\u00a0\u00bbArchivo|700||on|||||\u00a0\u00bb header_4_text_color=\u00a0\u00bbgcid-f1414204-51c0-48ff-bc68-c545a86d03e7&Prime; header_4_font_size=\u00a0\u00bb14px\u00a0\u00bb header_4_letter_spacing=\u00a0\u00bb1px\u00a0\u00bb header_4_line_height=\u00a0\u00bb1.5em\u00a0\u00bb text_orientation=\u00a0\u00bbcenter\u00a0\u00bb custom_margin=\u00a0\u00bb||0px||false|false\u00a0\u00bb hover_enabled=\u00a0\u00bb0&Prime; global_colors_info=\u00a0\u00bb{%22gcid-f1414204-51c0-48ff-bc68-c545a86d03e7%22:%91%22header_4_text_color%22%93}\u00a0\u00bb sticky_enabled=\u00a0\u00bb0&Prime;]<\/p>\n<h1><strong><span style=\"color: #ffcc00;\">Reconnaissance faciale \u00e0 l\u2019aide de r\u00e9seaux de neurones siamois<\/span><\/strong><\/h1>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\u00a0\u00bb4.19.2&Prime; _module_preset=\u00a0\u00bbdefault\u00a0\u00bb filter_opacity=\u00a0\u00bb75%\u00a0\u00bb global_colors_info=\u00a0\u00bb{}\u00a0\u00bb][et_pb_column type=\u00a0\u00bb4_4&Prime; _builder_version=\u00a0\u00bb4.19.2&Prime; _module_preset=\u00a0\u00bbdefault\u00a0\u00bb global_colors_info=\u00a0\u00bb{}\u00a0\u00bb][et_pb_text _builder_version=\u00a0\u00bb4.19.2&Prime; _module_preset=\u00a0\u00bbc16985e1-e0d6-4022-964d-e2bfc04fa633&Prime; header_font=\u00a0\u00bbRoboto|700|||||||\u00a0\u00bb header_text_color=\u00a0\u00bbgcid-6171fd24-b893-4d22-843a-f4129850a5c1&Prime; header_font_size=\u00a0\u00bb75px\u00a0\u00bb header_line_height=\u00a0\u00bb1.2em\u00a0\u00bb 
<h2><strong>Introduction</strong></h2>
<p><strong>Facial recognition</strong> aims to automatically identify people from characteristic information extracted from photographs of their faces. These techniques have evolved considerably over the last three decades (<a href="https://proceedings.neurips.cc/paper/1993/file/288cc0ff022877bd3df94bc9360b9c5d-Paper.pdf">Bromley et al.</a> were already studying the subject in 1994), in particular thanks to the contributions of <strong>artificial intelligence</strong> and especially of <strong>deep learning</strong>.</p>
<p><strong>Neural networks</strong> are now at the heart of many devices and systems used to identify individuals. Their design and integration naturally depend on the intended application and on the <strong>available hardware resources</strong>, as well as on other important parameters such as the <strong>availability of datasets for training</strong>.</p>
<p>Facial recognition is often framed as a <strong>classification problem</strong>: a neural network is used to determine the <strong>most probable class</strong> for the photograph of an individual's face.</p>
<p>This approach can, in some cases, be problematic because:</p>
<ul>
<li>it requires a fairly large <strong>labelled dataset</strong>, which can be tedious to build and to keep up to date;</li>
<li>the corresponding network must be <strong>retrained</strong> every time new classes (new individuals to identify) are added.</li>
</ul>
<p>When, for example, new individuals must be recognised on the fly in a video stream, <strong>the classification approach proves unsuitable</strong>, and it becomes necessary to turn to solutions that are less demanding in hardware resources and computation time.</p>
<p>In such cases, the preferred option is an <strong>architecture built on a "similarity" function</strong>, used to determine whether or not the photographs of the people to identify match the representations of known individuals stored in a database (a database that can itself be enriched in real time, as new faces are detected).</p>
<p>We describe here a solution of this type, based on a <strong>Siamese architecture</strong>, which we tested and deployed in particular for the <strong><a href="https://www.robocup.org/domains/3">RoboCup@Home</a></strong>, an international service-robotics competition in which robots must interact with human operators.</p>
<p><img src="http://blog.vaniila-ai.catie-na.fr/wp-content/uploads/2022/12/Reconnaissance_faciale.jpg" alt="Facial recognition" /></p>
<h2><strong>General architecture</strong></h2>
<p>The facial recognition solution we developed integrates tools and neural networks that are respectively in charge of:</p>
<ul>
<li>detecting the faces of individuals in a photograph;</li>
<li>producing, for each isolated face, a 64-dimension "identity vector" representing it;</li>
<li>computing the distance between the vectors associated with two distinct shots;</li>
<li>determining, by scanning a database, whether or not the vector associated with a face is "close" to that of an already identified one.</li>
</ul>
<p>The <strong>detection of faces</strong> in a photograph or a video stream, followed by their <strong>extraction</strong>, is carried out with tools we will come back to later.</p>
<p>The core of the system is a model whose objective function computes a similarity, used to determine whether or not two photographs of faces refer to the same individual.</p>
<p>The architecture used here is <strong>"Siamese"</strong>: it involves two instances of the same <strong>convolutional neural network</strong>, each taking a photograph of a face as input and producing as output a 64-dimension <strong>vector representation</strong> of it.</p>
<p><img src="http://blog.vaniila-ai.catie-na.fr/wp-content/uploads/2022/12/Reconnaissance_faciale2-1.png" alt="Siamese architecture" /></p>
<p>The convolutional network was trained to produce <strong>representations that are close</strong>, in Euclidean distance, <strong>for two shots of the same person's face</strong> and, conversely, <strong>distant or very distant</strong> for shots of two <strong>different people</strong>.</p>
<p>The outputs of the two network instances (identical in every respect, thus sharing the same configuration and the same weights) are then joined and used to compute a <strong>similarity score directly derived from the distance between the vector representations of the two input shots</strong>.</p>
<p>Each face detected in a photograph or taken from a video stream is then encoded by the network, and the resulting vector is <strong>compared with a set of known signatures</strong> stored in a database. The result of this comparison, returned as a scalar value (the similarity score mentioned above), is then evaluated against a threshold beyond which the signatures can be considered <strong>identical</strong> and, consequently, the individual concerned as <strong>identified</strong>.</p>
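<p>To make this comparison step concrete, here is a minimal sketch, in Python, of the identification loop just described; the <code>encoder</code> network, the in-memory database and the threshold value are illustrative assumptions, not our exact implementation.</p>
<pre><code>from typing import Optional

import torch

def identify(face: torch.Tensor, encoder: torch.nn.Module,
             database: dict, threshold: float = 0.8) -> Optional[str]:
    """Return the name of the closest known identity, or None if no
    stored 64-d embedding is close enough in Euclidean distance."""
    encoder.eval()
    with torch.no_grad():
        embedding = encoder(face.unsqueeze(0)).squeeze(0)  # 64-d identity vector
    best_name, best_dist = None, float("inf")
    for name, reference in database.items():  # {name: stored embedding}
        dist = torch.dist(embedding, reference).item()  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Below the threshold, the two signatures are considered identical;
    # otherwise the face is unknown and could be enrolled on the fly.
    return best_name if best_dist < threshold else None
</code></pre>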
<h2><strong>Network characteristics and training</strong></h2>
<p>The challenge here is to design and train the convolutional network so that <strong>similar inputs are projected to relatively close locations in the representation space</strong> and, conversely, <strong>different inputs are projected to distant points</strong>.</p>
<h3><em>Dataset and pre-processing</em></h3>
<p><img src="http://blog.vaniila-ai.catie-na.fr/wp-content/uploads/2022/12/Reconnaissance_faciale3.png" alt="Dataset examples" /></p>
<p>The network was trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/vgg_face2/">VGGFace2</a> dataset of Cao et al. (2018), a publicly available dataset containing about 3.3 million images of more than 9,000 people.</p>
<p>The images in this dataset show great variability in pose, subject age, exposure, and so on. They were therefore <strong>normalised</strong> so as to detect the faces and place their characteristic landmarks (eyes, nose, mouth) at identical coordinates regardless of the shot considered.</p>
<p>This image normalisation step is critical for the performance of the network. Face detection was performed with the <a href="https://arxiv.org/abs/1905.00641v2">RetinaFace</a> neural network of Deng et al. (2019), which identifies a <em>bounding box</em> for the face as well as the characteristic landmarks. The resulting crop is then <strong>warped</strong> so that the landmarks fall at the predefined positions.</p>
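<p>As an illustration of this alignment step, here is a sketch assuming RetinaFace-style five-point landmarks (eyes, nose tip, mouth corners); the template coordinates below are plausible placeholder values for a 112×112 crop, not the exact positions we used.</p>
<pre><code>import cv2
import numpy as np

# Canonical landmark positions in a 112x112 crop (placeholder values):
# left eye, right eye, nose tip, left and right mouth corners.
TEMPLATE = np.array([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                     [41.5, 92.4], [70.7, 92.2]], dtype=np.float32)

def align_face(image: np.ndarray, landmarks: np.ndarray, size: int = 112) -> np.ndarray:
    """Warp `image` so that the five detected landmarks (a 5x2 array,
    e.g. from RetinaFace) land on the canonical TEMPLATE positions."""
    # Estimate a similarity transform (rotation + uniform scale + translation).
    matrix, _ = cv2.estimateAffinePartial2D(landmarks.astype(np.float32),
                                            TEMPLATE, method=cv2.LMEDS)
    return cv2.warpAffine(image, matrix, (size, size))
</code></pre>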
<p>The convolutional network at the heart of our facial recognition system was then trained on these normalised shots.</p>
<h3><em>Architecture</em></h3>
<p>The network is built on an <a href="https://arxiv.org/abs/1905.11946">EfficientNet-B0</a> architecture (Tan and Le, 2019). This choice is a compromise between the various constraints of our problem, since the algorithm runs on board the robot, on a graphics card with limited capacity: the number of parameters held in memory is constrained, and execution must be fast enough (the decision has to be quick because the people to identify may be moving, for example).</p>
<p>This network offers relatively short inference times compared with deeper networks, which are admittedly more accurate but entail significantly longer processing times.</p>
<p><img src="http://blog.vaniila-ai.catie-na.fr/wp-content/uploads/2022/12/Reconnaissance_faciale4.png" alt="EfficientNet-B0 architecture" /></p>
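<p>The sketch below shows how such an encoder can be assembled from torchvision's EfficientNet-B0 by replacing the ImageNet classification head with a 64-dimension embedding head; the pretrained weights and the final L2 normalisation are common choices we assume here, not details taken from our exact model.</p>
<pre><code>import torch
import torch.nn as nn
import torchvision.models as models

class FaceEncoder(nn.Module):
    """EfficientNet-B0 backbone producing a 64-d identity vector (sketch)."""

    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
        # Swap the 1000-class ImageNet classifier for an embedding head.
        in_features = backbone.classifier[1].in_features  # 1280 for B0
        backbone.classifier = nn.Linear(in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalising keeps Euclidean distances on a comparable scale.
        return nn.functional.normalize(self.backbone(x), dim=-1)
</code></pre>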
<p>Remarks:</p>
<ul>
<li>EfficientNet-B0 is the product of a research field that holds an important place in deep learning, NAS (<em>Neural Architecture Search</em>), whose purpose is to automate and optimise the architecture of the networks used. It has given rise to many networks, the best known being the <a href="https://arxiv.org/abs/1704.04861">MobileNets</a> of Howard et al. (2017), <a href="https://arxiv.org/abs/1905.11946">EfficientNet</a> (Tan and Le, 2019) and <a href="https://arxiv.org/abs/2201.03545">ConvNeXt</a> (Liu et al., 2022).</li>
<li>Nowadays, vision <em>transformers</em> (<a href="https://arxiv.org/abs/2010.11929">ViT</a> of Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai et al. (2020), or for example the <a href="https://arxiv.org/abs/2103.14030">Swin Transformer</a> of Liu, Lin, Cao, Hu et al. (2021)) are an alternative to convolutional neural networks.</li>
</ul>
<h3><em>Choice of the objective function</em></h3>
<p>Similarity learning requires suitable objective functions. Among them, the <a href="https://ieeexplore.ieee.org/document/1640964"><em>contrastive loss</em></a> of Hadsell et al. (2005) and the <a href="https://arxiv.org/abs/1503.03832"><em>triplet loss</em></a> of Schroff et al. (2015) are frequently cited as references in the literature.</p>
<p>The <em><strong>contrastive loss</strong></em> is defined by:</p>
<p>$$\mathcal{L}(v_1, v_2) = \alpha \, d(v_1, v_2)^2 + (1 - \alpha) \max(0,\, m - d(v_1, v_2))^2$$</p>
<p><em>where</em> $v_1$ <em>and</em> $v_2$ <em>are two vectors,</em></p>
<p>$\alpha$ <em>is a coefficient equal to 1 if the two vectors belong to the same class, 0 otherwise,</em></p>
<p>$d$ <em>is any distance function,</em></p>
<p>$m$ <em>is a real number called the margin.</em></p>
<p>Intuitively, this objective function penalises two vectors of the same class by their distance, while two vectors of different classes are penalised only if their distance is less than $m$.</p>
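<p>This definition translates almost line for line into PyTorch; the sketch below assumes Euclidean distance, batched inputs and a default margin of 1.</p>
<pre><code>import torch
import torch.nn.functional as F

def contrastive_loss(v1: torch.Tensor, v2: torch.Tensor,
                     alpha: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Contrastive loss over a batch of embedding pairs;
    `alpha` is 1 for same-identity pairs and 0 otherwise."""
    d = F.pairwise_distance(v1, v2)                  # Euclidean distance
    same = alpha * d.pow(2)                          # pull same-class pairs together
    diff = (1 - alpha) * torch.clamp(margin - d, min=0).pow(2)  # push others beyond m
    return (same + diff).mean()
</code></pre>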
<p>The <strong>triplet loss</strong> involves a third vector, the "anchor", in its equation:</p>
<p>$$\mathcal{L}(a, v_1, v_2) = \max(0,\, d(a, v_1) - d(a, v_2) + m)$$</p>
<p><em>here,</em> $a$ <em>denotes the anchor,</em></p>
<p>$v_1$ <em>is a vector of the same class as</em> $a$<em>,</em></p>
<p>$v_2$ <em>is a vector of a class different from</em> $a$<em>.</em></p>
<p>This function simultaneously tends to pull the pair $(a, v_1)$ together and to push the pair $(a, v_2)$ apart, as shown in the following figure:</p>
<p><img src="http://blog.vaniila-ai.catie-na.fr/wp-content/uploads/2022/12/Reconnaissance_faciale7.png" alt="Effect of the triplet loss" /></p>
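<p>The triplet loss is just as direct to write down (PyTorch also ships a ready-made <code>torch.nn.TripletMarginLoss</code>); the batched form and the margin value are again assumptions.</p>
<pre><code>import torch
import torch.nn.functional as F

def triplet_loss(a: torch.Tensor, v1: torch.Tensor, v2: torch.Tensor,
                 margin: float = 1.0) -> torch.Tensor:
    """Triplet loss: `a` is the anchor, `v1` a positive example of the
    same identity, `v2` a negative example of a different identity."""
    d_pos = F.pairwise_distance(a, v1)  # distance to pull down
    d_neg = F.pairwise_distance(a, v2)  # distance to push up
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
</code></pre>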
<p>In general, training networks directly with these objective functions is rather costly: such systems take longer to converge than, for example, "classic" classification problems. To work around this difficulty, we adopted an alternative approach, training the network in two stages.</p>
<h2><strong>Implementation and integration</strong></h2>
<p>The facial recognition system was produced by integrating tools and scripts essentially written in Python.</p>
<p>The neural network itself is implemented with <a href="https://pytorch.org/">PyTorch</a> (Paszke, Gross, Chintala, Chanan et al., 2016), and more precisely with <a href="https://www.pytorchlightning.ai/">PyTorch Lightning</a> (Falcon et al., 2019), and was trained with the computing resources of CATIE's <a href="https://www.vaniila.ai/">VANIILA</a> platform.</p>
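<p>As an illustration of this stack, a minimal PyTorch Lightning module wiring an encoder to the triplet loss could look like the sketch below; the triplet batch format, the optimiser and the hyperparameters are assumptions, not our exact training code.</p>
<pre><code>import pytorch_lightning as pl
import torch

class SiameseFaceModel(pl.LightningModule):
    """Minimal Lightning wrapper around an embedding network (sketch)."""

    def __init__(self, encoder: torch.nn.Module, margin: float = 1.0):
        super().__init__()
        self.encoder = encoder
        self.criterion = torch.nn.TripletMarginLoss(margin=margin)

    def training_step(self, batch, batch_idx):
        # Each batch is assumed to hold aligned face triplets:
        # anchor, positive (same person), negative (different person).
        anchor, positive, negative = batch
        loss = self.criterion(self.encoder(anchor),
                              self.encoder(positive),
                              self.encoder(negative))
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
</code></pre>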
<p>This made it possible to run the successive training sessions in a reasonable time (less than two hours), and the resulting performance proved quite interesting, with an F1 score of 0.92, better than the commercial solutions we tested.</p>
<h2><strong>Conclusion</strong></h2>
<p>We have seen how a first stage of face extraction and alignment, followed by a second stage of training a Siamese network with a suitable loss function, makes it possible to tackle a facial recognition problem.</p>
<p>One of the limits of this kind of technique, found in other domains as well, is the need for a very large number of labelled images to train the model. This labelling can be very costly, if not impossible. To remedy this, new methods based on self-supervised learning have recently emerged, which train models on large amounts of unlabelled data.</p>
<p>We will go into the details of these self-supervised techniques in a future article.</p>
<p>So stay tuned!</p>
<h2><strong>Bibliography</strong></h2>
<ul>
<li><a href="https://arxiv.org/abs/2201.03545">A ConvNet for the 2020s</a>, Liu et al. (2022)</li>
<li><a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale</a>, Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai et al. (2020)</li>
<li><a href="https://ieeexplore.ieee.org/document/1640964">Dimensionality Reduction by Learning an Invariant Mapping</a>, Hadsell et al. (2005)</li>
<li><a href="https://arxiv.org/abs/1905.11946">EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks</a>, Tan and Le (2019)</li>
<li><a href="https://arxiv.org/abs/1503.03832">FaceNet: A Unified Embedding for Face Recognition and Clustering</a>, Schroff et al. (2015)</li>
<li><a href="https://arxiv.org/abs/1704.04861">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a>, Howard et al. (2017)</li>
<li><a href="https://github.com/pytorch/pytorch">PyTorch</a>, Paszke, Gross, Chintala, Chanan et al. (2016)</li>
<li><a href="https://github.com/Lightning-AI/lightning">PyTorch Lightning</a>, Falcon et al. (2019)</li>
<li><a href="https://arxiv.org/abs/1905.00641v2">RetinaFace: Single-stage Dense Face Localisation in the Wild</a>, Deng et al. (2019)</li>
<li><a href="https://proceedings.neurips.cc/paper/1993/file/288cc0ff022877bd3df94bc9360b9c5d-Paper.pdf">Signature Verification using a "Siamese" Time Delay Neural Network</a>, Bromley et al. (1994)</li>
<li><a href="https://arxiv.org/abs/2103.14030">Swin Transformer: Hierarchical Vision Transformer using Shifted Windows</a>, Liu, Lin, Cao, Hu et al. (2021)</li>
<li><a href="https://www.robots.ox.ac.uk/~vgg/publications/2018/Cao18/cao18.pdf">VGGFace2: A dataset for recognising faces across pose and age</a>, Cao et al. (2018)</li>
</ul>