18 Commits

Author SHA1 Message Date
068be82ee3 remove some code 2020-06-11 11:28:08 +08:00
dced9d9ef7 use mysql instead of csv 2020-06-11 11:23:26 +08:00
fbb1dbd3f5 use mysql instead of csv to store face database 2020-06-11 11:20:00 +08:00
e9b923a864 supporting names in Chinese 2020-05-29 10:34:39 +08:00
7599f86d07 remove Chinese characters
Some issues in https://github.com/coneypo/Dlib_face_recognition_from_camera/pull/18;
Reset to old version and will update when the issue is fixed;
2020-05-18 17:53:36 +08:00
fefd44a54c Merge pull request #18 from TianleLin/master
Mod: use PIL to make Chinese title available
2020-05-13 10:42:27 +08:00
c57db57dfd Mod: use PIL to make Chinese title available 2020-05-13 10:33:38 +08:00
baaf84226a add video stream fps log 2020-04-19 21:23:02 +08:00
64d797efcb define class 'Face_register' and 'Face_recognizer' 2020-04-19 20:04:11 +08:00
3ca94d6d96 remove cvtColor 2020-04-02 17:41:45 +08:00
262609f91f use 'iloc' instead of 'ix' to fix the bug of pandas version 2020-02-27 07:51:15 +08:00
bd5a4034f4 remove landmarks_5 dat 2019-12-21 01:13:40 +08:00
a0a070ce3a Delete useless code 2019-11-20 00:26:31 +08:00
13b6807441 add the info for "cap.set" 2019-11-19 15:00:55 +08:00
fd13a6b7ec Update get_faces_from_camera.py 2019-11-19 14:56:42 +08:00
c80b1d19b8 Update get_faces_from_camera.py 2019-11-19 14:41:28 +08:00
7e85411ae3 update readme
please install some python packages if needed
2019-08-28 10:36:58 +08:00
08deb0d608 Intro to algorithm and the way to customize names
1. Brief introduction of the face recognition algorithm: ResNet;
2. Add the patch to customize names instead of "Person 1", "Person 2"...
2019-04-26 15:41:35 +08:00
34 changed files with 933 additions and 551 deletions

View File

@ -4,7 +4,7 @@
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$/data" isTestSource="false" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="jdk" jdkName="Python 3.7 (2)" jdkType="Python SDK" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
<component name="TestRunnerService">

View File

@ -12,6 +12,7 @@
<option name="ignoredErrors">
<list>
<option value="N806" />
<option value="N802" />
</list>
</option>
</inspection_tool>

2
.idea/misc.xml generated
View File

@ -1,4 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.6" project-jdk-type="Python SDK" />
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.7 (2)" project-jdk-type="Python SDK" />
</project>

296
.idea/workspace.xml generated
View File

@ -3,23 +3,7 @@
<component name="ChangeListManager">
<list default="true" id="e58b655a-3a9b-4001-b4da-39e07ab46629" name="Default Changelist" comment="">
<change beforePath="$PROJECT_DIR$/.idea/workspace.xml" beforeDir="false" afterPath="$PROJECT_DIR$/.idea/workspace.xml" afterDir="false" />
<change beforePath="$PROJECT_DIR$/README.rst" beforeDir="false" afterPath="$PROJECT_DIR$/README.rst" afterDir="false" />
<change beforePath="$PROJECT_DIR$/data/data_dlib/dlib_face_recognition_resnet_model_v1.dat" beforeDir="false" afterPath="$PROJECT_DIR$/data/data_dlib/dlib_face_recognition_resnet_model_v1.dat" afterDir="false" />
<change beforePath="$PROJECT_DIR$/data/data_dlib/shape_predictor_5_face_landmarks.dat" beforeDir="false" afterPath="$PROJECT_DIR$/data/data_dlib/shape_predictor_5_face_landmarks.dat" afterDir="false" />
<change beforePath="$PROJECT_DIR$/data/data_dlib/shape_predictor_68_face_landmarks.dat" beforeDir="false" afterPath="$PROJECT_DIR$/data/data_dlib/shape_predictor_68_face_landmarks.dat" afterDir="false" />
<change beforePath="$PROJECT_DIR$/face_reco_from_camera.py" beforeDir="false" afterPath="$PROJECT_DIR$/face_reco_from_camera.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/features_extraction_to_csv.py" beforeDir="false" afterPath="$PROJECT_DIR$/features_extraction_to_csv.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/get_faces_from_camera.py" beforeDir="false" afterPath="$PROJECT_DIR$/get_faces_from_camera.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/how_to_use_camera.py" beforeDir="false" afterPath="$PROJECT_DIR$/how_to_use_camera.py" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/Dlib_Face_recognition_by_coneypo.pptx" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/Dlib_Face_recognition_by_coneypo.pptx" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/face_reco_single_person.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/face_reco_single_person.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/face_reco_single_person_customize_name.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/face_reco_single_person_customize_name.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/face_reco_two_people.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/face_reco_two_people.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/face_reco_two_people_in_database.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/face_reco_two_people_in_database.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/get_face_from_camera.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/get_face_from_camera.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/get_face_from_camera_out_of_range.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/get_face_from_camera_out_of_range.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/introduction/overview.png" beforeDir="false" afterPath="$PROJECT_DIR$/introduction/overview.png" afterDir="false" />
<change beforePath="$PROJECT_DIR$/requirements.txt" beforeDir="false" afterPath="$PROJECT_DIR$/requirements.txt" afterDir="false" />
<change beforePath="$PROJECT_DIR$/features_extraction_to_mysql.py" beforeDir="false" afterPath="$PROJECT_DIR$/features_extraction_to_mysql.py" afterDir="false" />
</list>
<option name="EXCLUDED_CONVERTED_TO_IGNORED" value="true" />
<option name="SHOW_DIALOG" value="false" />
@ -27,45 +11,6 @@
<option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
<option name="LAST_RESOLUTION" value="IGNORE" />
</component>
<component name="FileEditorManager">
<leaf SIDE_TABS_SIZE_LIMIT_KEY="300">
<file pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/README.rst">
<provider selected="true" editor-type-id="restructured-text-editor" />
</entry>
</file>
<file pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/get_faces_from_camera.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="309">
<caret line="69" selection-start-line="69" selection-end-line="69" />
</state>
</provider>
</entry>
</file>
<file pinned="false" current-in-tab="false">
<entry file="file://$PROJECT_DIR$/features_extraction_to_csv.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="248">
<caret line="73" column="36" selection-start-line="73" selection-start-column="36" selection-end-line="73" selection-end-column="36" />
</state>
</provider>
</entry>
</file>
<file pinned="false" current-in-tab="true">
<entry file="file://$PROJECT_DIR$/face_reco_from_camera.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="489">
<caret line="160" column="23" lean-forward="true" selection-start-line="160" selection-start-column="23" selection-end-line="160" selection-end-column="23" />
<folding>
<element signature="e#230#264#0" expanded="true" />
</folding>
</state>
</provider>
</entry>
</file>
</leaf>
</component>
<component name="FileTemplateManagerImpl">
<option name="RECENT_TEMPLATES">
<list>
@ -73,80 +18,20 @@
</list>
</option>
</component>
<component name="FindInProjectRecents">
<findStrings>
<find>path_photos_from_camera</find>
<find>path_csv_from_photos</find>
<find>facerec</find>
<find>img</find>
<find>feature_mean_list_personX</find>
<find>feature_list_personX</find>
<find>feature</find>
<find>features_list_personX</find>
<find>feature_mean_personX</find>
<find>data_csvs</find>
<find>features_known_arr</find>
<find>with</find>
</findStrings>
<replaceStrings>
<replace>face_rec</replace>
<replace>img_rd</replace>
<replace>descriptor_mean_list_personX</replace>
<replace>features_list_personX</replace>
<replace>features_mean_personX</replace>
</replaceStrings>
</component>
<component name="Git.Settings">
<option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" />
</component>
<component name="IdeDocumentHistory">
<option name="CHANGED_PATHS">
<list>
<option value="$PROJECT_DIR$/requirements.txt" />
<option value="$PROJECT_DIR$/get_features_into_CSV.py" />
<option value="$PROJECT_DIR$/get_origin.py" />
<option value="$PROJECT_DIR$/test.py" />
<option value="$PROJECT_DIR$/features_extraction_to_csv.py" />
<option value="$PROJECT_DIR$/get_faces_from_camera.py" />
<option value="$PROJECT_DIR$/README.rst" />
<option value="$PROJECT_DIR$/face_reco_from_camera.py" />
</list>
</option>
</component>
<component name="ProjectFrameBounds" extendedState="6">
<option name="x" value="-281" />
<option name="y" value="574" />
<option name="width" value="1910" />
<option name="height" value="741" />
</component>
<component name="ProjectId" id="1Tq7xXTTl7R3HeMqxP7UMMKZMeC" />
<component name="ProjectLevelVcsManager" settingsEditedManually="true" />
<component name="ProjectView">
<navigator proportions="" version="1">
<foldersAlwaysOnTop value="true" />
</navigator>
<panes>
<pane id="ProjectPane">
<subPane>
<expand>
<path>
<item name="Dlib_face_recognition_from_camera" type="b2602c69:ProjectViewProjectNode" />
<item name="Dlib_face_recognition_from_camera" type="462c0819:PsiDirectoryNode" />
</path>
<path>
<item name="Dlib_face_recognition_from_camera" type="b2602c69:ProjectViewProjectNode" />
<item name="Dlib_face_recognition_from_camera" type="462c0819:PsiDirectoryNode" />
<item name="data" type="462c0819:PsiDirectoryNode" />
</path>
</expand>
<select />
</subPane>
</pane>
<pane id="Scope" />
</panes>
<component name="ProjectViewState">
<option name="hideEmptyMiddlePackages" value="true" />
<option name="showExcludedFiles" value="true" />
<option name="showLibraryContents" value="true" />
</component>
<component name="PropertiesComponent">
<property name="SHARE_PROJECT_CONFIGURATION_FILES" value="true" />
<property name="last_opened_file_path" value="/media/con/Ubuntu 18.0/Face_Recognition" />
<property name="last_opened_file_path" value="$PROJECT_DIR$/../Django_MySQL_Table" />
<property name="settings.editor.selected.configurable" value="com.jetbrains.python.configuration.PyActiveSdkModuleConfigurable" />
</component>
<component name="RunDashboard">
<option name="ruleStates">
@ -160,8 +45,8 @@
</list>
</option>
</component>
<component name="RunManager" selected="Python.face_reco_from_camera">
<configuration name="face_reco_from_camera" type="PythonConfigurationType" factoryName="Python" temporary="true">
<component name="RunManager" selected="Python.features_extraction_to_mysql">
<configuration name="face_reco_from_camera_mysql" type="PythonConfigurationType" factoryName="Python" temporary="true">
<module name="Dlib_face_recognition_from_camera" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
@ -173,7 +58,7 @@
<option name="IS_MODULE_SDK" value="true" />
<option name="ADD_CONTENT_ROOTS" value="true" />
<option name="ADD_SOURCE_ROOTS" value="true" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/face_reco_from_camera.py" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/face_reco_from_camera_mysql.py" />
<option name="PARAMETERS" value="" />
<option name="SHOW_COMMAND_LINE" value="false" />
<option name="EMULATE_TERMINAL" value="false" />
@ -182,7 +67,7 @@
<option name="INPUT_FILE" value="" />
<method v="2" />
</configuration>
<configuration name="features_extraction_to_csv" type="PythonConfigurationType" factoryName="Python" temporary="true">
<configuration name="features_extraction_to_mysql" type="PythonConfigurationType" factoryName="Python" temporary="true">
<module name="Dlib_face_recognition_from_camera" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
@ -194,7 +79,7 @@
<option name="IS_MODULE_SDK" value="true" />
<option name="ADD_CONTENT_ROOTS" value="true" />
<option name="ADD_SOURCE_ROOTS" value="true" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/features_extraction_to_csv.py" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/features_extraction_to_mysql.py" />
<option name="PARAMETERS" value="" />
<option name="SHOW_COMMAND_LINE" value="false" />
<option name="EMULATE_TERMINAL" value="false" />
@ -224,7 +109,7 @@
<option name="INPUT_FILE" value="" />
<method v="2" />
</configuration>
<configuration name="get_origin" type="PythonConfigurationType" factoryName="Python" temporary="true">
<configuration name="mysql_insert" type="PythonConfigurationType" factoryName="Python" temporary="true">
<module name="Dlib_face_recognition_from_camera" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
@ -236,7 +121,7 @@
<option name="IS_MODULE_SDK" value="true" />
<option name="ADD_CONTENT_ROOTS" value="true" />
<option name="ADD_SOURCE_ROOTS" value="true" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/get_origin.py" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/mysql_insert.py" />
<option name="PARAMETERS" value="" />
<option name="SHOW_COMMAND_LINE" value="false" />
<option name="EMULATE_TERMINAL" value="false" />
@ -245,7 +130,7 @@
<option name="INPUT_FILE" value="" />
<method v="2" />
</configuration>
<configuration name="test" type="PythonConfigurationType" factoryName="Python" temporary="true">
<configuration name="read_mysql" type="PythonConfigurationType" factoryName="Python" temporary="true">
<module name="Dlib_face_recognition_from_camera" />
<option name="INTERPRETER_OPTIONS" value="" />
<option name="PARENT_ENVS" value="true" />
@ -257,7 +142,7 @@
<option name="IS_MODULE_SDK" value="true" />
<option name="ADD_CONTENT_ROOTS" value="true" />
<option name="ADD_SOURCE_ROOTS" value="true" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/test.py" />
<option name="SCRIPT_NAME" value="$PROJECT_DIR$/read_mysql.py" />
<option name="PARAMETERS" value="" />
<option name="SHOW_COMMAND_LINE" value="false" />
<option name="EMULATE_TERMINAL" value="false" />
@ -267,19 +152,19 @@
<method v="2" />
</configuration>
<list>
<item itemvalue="Python.face_reco_from_camera" />
<item itemvalue="Python.features_extraction_to_csv" />
<item itemvalue="Python.mysql_insert" />
<item itemvalue="Python.features_extraction_to_mysql" />
<item itemvalue="Python.face_reco_from_camera_mysql" />
<item itemvalue="Python.read_mysql" />
<item itemvalue="Python.get_faces_from_camera" />
<item itemvalue="Python.get_origin" />
<item itemvalue="Python.test" />
</list>
<recent_temporary>
<list>
<item itemvalue="Python.face_reco_from_camera" />
<item itemvalue="Python.features_extraction_to_csv" />
<item itemvalue="Python.features_extraction_to_mysql" />
<item itemvalue="Python.read_mysql" />
<item itemvalue="Python.face_reco_from_camera_mysql" />
<item itemvalue="Python.get_faces_from_camera" />
<item itemvalue="Python.test" />
<item itemvalue="Python.get_origin" />
<item itemvalue="Python.mysql_insert" />
</list>
</recent_temporary>
</component>
@ -296,28 +181,56 @@
</task>
<servers />
</component>
<component name="ToolWindowManager">
<frame x="0" y="27" width="1920" height="988" extended-state="6" />
<editor active="true" />
<layout>
<window_info active="true" content_ui="combo" id="Project" order="0" visible="true" weight="0.2090813" />
<window_info id="Structure" order="1" weight="0.25" />
<window_info id="Favorites" order="2" side_tool="true" />
<window_info anchor="bottom" id="Message" order="0" />
<window_info anchor="bottom" id="Find" order="1" />
<window_info anchor="bottom" id="Run" order="2" visible="true" weight="0.25686976" />
<window_info anchor="bottom" id="Debug" order="3" weight="0.39952996" />
<window_info anchor="bottom" id="Cvs" order="4" weight="0.25" />
<window_info anchor="bottom" id="Inspection" order="5" weight="0.4" />
<window_info anchor="bottom" id="TODO" order="6" />
<window_info anchor="bottom" id="Version Control" order="7" weight="0.32983682" />
<window_info anchor="bottom" id="Terminal" order="8" weight="0.28434888" />
<window_info anchor="bottom" id="Event Log" order="9" side_tool="true" />
<window_info anchor="bottom" id="Python Console" order="10" />
<window_info anchor="right" id="Commander" order="0" weight="0.4" />
<window_info anchor="right" id="Ant Build" order="1" weight="0.25" />
<window_info anchor="right" content_ui="combo" id="Hierarchy" order="2" weight="0.25" />
</layout>
<component name="Vcs.Log.Tabs.Properties">
<option name="TAB_STATES">
<map>
<entry key="MAIN">
<value>
<State>
<option name="COLUMN_ORDER" />
</State>
</value>
</entry>
</map>
</option>
</component>
<component name="WindowStateProjectService">
<state width="1897" height="194" key="GridCell.Tab.0.bottom" timestamp="1587297625581">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1897" height="194" key="GridCell.Tab.0.bottom/0.27.1920.993@0.27.1920.993" timestamp="1587297625581" />
<state width="1897" height="194" key="GridCell.Tab.0.center" timestamp="1587297625579">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1897" height="194" key="GridCell.Tab.0.center/0.27.1920.993@0.27.1920.993" timestamp="1587297625579" />
<state width="1897" height="194" key="GridCell.Tab.0.left" timestamp="1587297625578">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1897" height="194" key="GridCell.Tab.0.left/0.27.1920.993@0.27.1920.993" timestamp="1587297625578" />
<state width="1897" height="194" key="GridCell.Tab.0.right" timestamp="1587297625580">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1897" height="194" key="GridCell.Tab.0.right/0.27.1920.993@0.27.1920.993" timestamp="1587297625580" />
<state width="1485" height="299" key="GridCell.Tab.1.bottom" timestamp="1587263908422">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1485" height="299" key="GridCell.Tab.1.bottom/0.27.1920.993@0.27.1920.993" timestamp="1587263908422" />
<state width="1485" height="299" key="GridCell.Tab.1.center" timestamp="1587263908422">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1485" height="299" key="GridCell.Tab.1.center/0.27.1920.993@0.27.1920.993" timestamp="1587263908422" />
<state width="1485" height="299" key="GridCell.Tab.1.left" timestamp="1587263908422">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1485" height="299" key="GridCell.Tab.1.left/0.27.1920.993@0.27.1920.993" timestamp="1587263908422" />
<state width="1485" height="299" key="GridCell.Tab.1.right" timestamp="1587263908422">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state width="1485" height="299" key="GridCell.Tab.1.right/0.27.1920.993@0.27.1920.993" timestamp="1587263908422" />
<state x="759" y="251" width="672" height="678" key="search.everywhere.popup" timestamp="1587264669499">
<screen x="0" y="27" width="1920" height="993" />
</state>
<state x="759" y="251" width="672" height="678" key="search.everywhere.popup/0.27.1920.993@0.27.1920.993" timestamp="1587264669499" />
</component>
<component name="XDebuggerManager">
<breakpoint-manager>
@ -330,63 +243,4 @@
</default-breakpoints>
</breakpoint-manager>
</component>
<component name="editorHistoryManager">
<entry file="file://$PROJECT_DIR$/use_camera.py" />
<entry file="file://$PROJECT_DIR$/patch" />
<entry file="file://$PROJECT_DIR$/README.md" />
<entry file="file://$PROJECT_DIR$/data/data_csvs_from_camera/person_2.csv" />
<entry file="file://$PROJECT_DIR$/data/data_faces_from_camera/person_6/img_face_1.jpg" />
<entry file="file://$PROJECT_DIR$/introduction/face_reco_single_person_custmize_name.png" />
<entry file="file://$PROJECT_DIR$/data/data_csvs_from_camera/person_1.csv" />
<entry file="file://$PROJECT_DIR$/get_features_into_CSV.py" />
<entry file="file://$PROJECT_DIR$/get_origin.py" />
<entry file="file://$PROJECT_DIR$/test.py" />
<entry file="file://$PROJECT_DIR$/requirements.txt">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="36">
<caret line="2" column="16" selection-start-line="2" selection-start-column="16" selection-end-line="2" selection-end-column="16" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/introduction/face_reco_two_people_in_database.png">
<provider selected="true" editor-type-id="images" />
</entry>
<entry file="file://$PROJECT_DIR$/how_to_use_camera.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="486">
<caret line="27" column="13" selection-start-line="27" selection-start-column="13" selection-end-line="27" selection-end-column="13" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/data/features_all.csv">
<provider selected="true" editor-type-id="csv-text-editor" />
</entry>
<entry file="file://$PROJECT_DIR$/get_faces_from_camera.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="309">
<caret line="69" selection-start-line="69" selection-end-line="69" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/features_extraction_to_csv.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="248">
<caret line="73" column="36" selection-start-line="73" selection-start-column="36" selection-end-line="73" selection-end-column="36" />
</state>
</provider>
</entry>
<entry file="file://$PROJECT_DIR$/README.rst">
<provider selected="true" editor-type-id="restructured-text-editor" />
</entry>
<entry file="file://$PROJECT_DIR$/face_reco_from_camera.py">
<provider selected="true" editor-type-id="text-editor">
<state relative-caret-position="489">
<caret line="160" column="23" lean-forward="true" selection-start-line="160" selection-start-column="23" selection-end-line="160" selection-end-column="23" />
<folding>
<element signature="e#230#264#0" expanded="true" />
</folding>
</state>
</provider>
</entry>
</component>
</project>

View File

@ -15,7 +15,7 @@ Detect and recognize single/multi-faces from camera;
:align: center
请不要离摄像头过近,人脸超出摄像头范围时会有 "OUT OF RANGE" 提醒 /
Please do not too close to the camera, or you can't save faces with "OUT OF RANGE" warning;
Please do not be too close to the camera, or you can't save faces with "OUT OF RANGE" warning;
.. image:: introduction/get_face_from_camera_out_of_range.png
:align: center
@ -30,20 +30,31 @@ Detect and recognize single/multi-faces from camera;
当多张人脸 / When multi-faces:
一张已录入人脸 + 未录入 unknown 人脸 / 1x known face + 1x unknown face:
一张已录入人脸 + 未录入 unknown 人脸 / 1x known face + 2x unknown face:
.. image:: introduction/face_reco_two_people.png
.. image:: introduction/face_reco_multi_people.png
:align: center
同时识别多张已录入人脸 / multi-faces recognition at the same time:
同时识别多张已录入人脸 / Multi-faces recognition at the same time:
.. image:: introduction/face_reco_two_people_in_database.png
:align: center
实时人脸特征描述子计算 / Real-time face descriptor computation:
.. image:: introduction/face_descriptor_single_person.png
:align: center
** 关于精度 / About accuracy:
* When using a distance threshold of ``0.6``, the dlib model obtains an accuracy of ``99.38%`` on the standard LFW face recognition benchmark.
** 关于算法 / About algorithm
* 基于 Residual Neural Network / 残差网络的 CNN 模型;
* This model is a ResNet network with 29 conv layers. It's essentially a version of the ResNet-34 network from the paper Deep Residual Learning for Image Recognition by He, Zhang, Ren, and Sun with a few layers removed and the number of filters per layer reduced by half.
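* 下面是一个阈值比较的简化示例 / A minimal sketch (not part of this repo) of how two 128D descriptors are compared against a distance threshold; dlib's reference threshold is ``0.6``, while the scripts in this repo use a stricter ``0.4``:
.. code-block:: python
import numpy as np

def is_same_person(feature_1, feature_2, threshold=0.4):
    # Euclidean distance between two 128D face descriptors
    dist = np.linalg.norm(np.array(feature_1) - np.array(feature_2))
    return dist < threshold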
Overview
********
@ -55,19 +66,27 @@ Overview
Steps
*****
#. 下载源码 / Download zip from website or via GitHub Desktop in windows, or git clone in Ubuntu
#. 安装依赖库 / Install some python packages if needed
.. code-block:: bash
pip3 install opencv-python
pip3 install scikit-image
pip3 install dlib
pip3 install pandas
pip3 install pillow
pip3 install pymysql    # only needed for the *_mysql.py scripts added here
#. 下载源码 / Download zip from website or via GitHub Desktop in windows, or git clone repo in Ubuntu
.. code-block:: bash
git clone https://github.com/coneypo/Dlib_face_recognition_from_camera
#. 进行 face register / 人脸信息采集录入
#. 进行人脸信息采集录入 / Register faces
.. code-block:: bash
python3 get_faces_from_camera.py
#. 提取所有录入人脸数据存入 features_all.csv / Features extraction and save into features_all.csv
#. 提取所有录入人脸数据存入 "features_all.csv" / Features extraction and save into "features_all.csv"
.. code-block:: bash
@ -88,14 +107,14 @@ Repo 的 tree / 树状图:
::
.
├── get_faces_from_camera.py # Step1. Faces register
├── features_extraction_to_csv.py # Step2. Features extraction
├── face_reco_from_camera.py # Step3. Faces recognition
├── get_faces_from_camera.py # Step1. Face register
├── features_extraction_to_csv.py # Step2. Feature extraction
├── face_reco_from_camera.py # Step3. Face recognizer
├── face_descriptor_from_camera.py # Face descriptor computation
├── how_to_use_camera.py # Use the default camera by opencv
├── data
│   ├── data_dlib # Dlib's model
│   │   ├── dlib_face_recognition_resnet_model_v1.dat
│   │   ├── shape_predictor_5_face_landmarks.dat
│   │   └── shape_predictor_68_face_landmarks.dat
│   ├── data_faces_from_camera # Face images captured from camera (will generate after step 1)
│   │   ├── person_1
@ -115,7 +134,7 @@ Repo 的 tree / 树状图:
│   ├── get_face_from_camera.png
│   └── overview.png
├── README.rst
└── requirements.txt # Some python packages needed
└── requirements.txt # Some python packages needed
用到的 Dlib 相关模型函数 / Dlib models and functions used:
@ -128,15 +147,22 @@ Repo 的 tree / 树状图:
faces = detector(img_gray, 0)
#. Dlib 人脸测器, output: <class 'dlib.dlib.full_object_detection'>
#. Dlib 人脸 landmark 特征点检测器, output: <class 'dlib.dlib.full_object_detection'>,
will use shape_predictor_68_face_landmarks.dat
.. code-block:: python
predictor = dlib.shape_predictor("data/data_dlib/shape_predictor_5_face_landmarks.dat")
# This is trained on the ibug 300-W dataset (https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/)
# Also note that this model file is designed for use with dlib's HOG face detector.
# That is, it expects the bounding boxes from the face detector to be aligned a certain way, the way dlib's HOG face detector does it.
# It won't work as well when used with a face detector that produces differently aligned boxes,
# such as the CNN based mmod_human_face_detector.dat face detector.
predictor = dlib.shape_predictor("data/data_dlib/shape_predictor_68_face_landmarks.dat")
shape = predictor(img_rd, faces[i])
#. 特征描述子 Face recognition model, the object maps human faces into 128D vectors
#. Dlib 特征描述子 Face recognition model, the object maps human faces into 128D vectors
.. code-block:: python
@ -169,7 +195,10 @@ Python 源码介绍如下:
* Compare the faces captured from camera with the faces you have registered which are saved in "features_all.csv"
* 将捕获到的人脸数据和之前存的人脸数据进行对比计算欧式距离, 由此判断是否是同一个人;
#. (optional) face_descriptor_from_camera.py
调用摄像头进行实时特征描述子计算; / Real-time face descriptor computation;
More
****
@ -184,6 +213,9 @@ Tips:
#. 人脸录入的时候先建文件夹再保存图片, 先 ``N`` 再 ``S`` / Press ``N`` before ``S``
#. 关于人脸识别卡顿 / FPS 低的问题: 不做 compare, 只跑 face_descriptor_from_camera.py 中的 face_reco_model.compute_face_descriptor, 在 CPU i7-8700K 上也只有 5~6 FPS, 所以主要是提取特征时耗资源 /
About the low FPS: even without the compare step, just running face_reco_model.compute_face_descriptor in face_descriptor_from_camera.py reaches only 5~6 FPS on an i7-8700K CPU, so feature extraction is the main cost (see the timing sketch at the end of this section)
可以访问我的博客获取本项目的更详细介绍,如有问题可以邮件联系我 /
For more details, please refer to my blog (in Chinese) or contact me by email:
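下面是一个简化的耗时测试 / A minimal timing sketch (not part of this repo; it reuses the default camera and the model paths shown above) to confirm that ``compute_face_descriptor`` dominates the per-frame cost:
.. code-block:: python
import time
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("data/data_dlib/shape_predictor_68_face_landmarks.dat")
face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")

cap = cv2.VideoCapture(0)
flag, img_rd = cap.read()
cap.release()
if not flag:
    raise RuntimeError("No frame from camera")

faces = detector(img_rd, 0)
if len(faces) != 0:
    shape = predictor(img_rd, faces[0])
    t_start = time.time()
    face_reco_model.compute_face_descriptor(img_rd, shape)
    # On an i7-8700K CPU this single call takes roughly 0.15~0.2 s, i.e. about 5~6 FPS
    print("compute_face_descriptor: %.3f s" % (time.time() - t_start))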

12 binary image files added in this change are not shown (sizes 8.7 KiB – 27 KiB).

View File

@ -0,0 +1,75 @@
# 摄像头实时人脸特征描述子计算 / Real-time face descriptor compute
import dlib # 人脸识别的库 Dlib
import cv2 # 图像处理的库 OpenCV
import time
# 1. Dlib 正向人脸检测器
detector = dlib.get_frontal_face_detector()
# 2. Dlib 人脸 landmark 特征点检测器
predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat')
# 3. Dlib Resnet 人脸识别模型,提取 128D 的特征矢量
face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
class Face_Descriptor:
def __init__(self):
self.frame_time = 0
self.frame_start_time = 0
self.fps = 0
def update_fps(self):
now = time.time()
self.frame_time = now - self.frame_start_time
self.fps = 1.0 / self.frame_time
self.frame_start_time = now
def run(self):
cap = cv2.VideoCapture(0)
cap.set(3, 480)
self.process(cap)
cap.release()
cv2.destroyAllWindows()
def process(self, stream):
while stream.isOpened():
flag, img_rd = stream.read()
k = cv2.waitKey(1)
faces = detector(img_rd, 0)
font = cv2.FONT_HERSHEY_SIMPLEX
# 检测到人脸
if len(faces) != 0:
for face in faces:
face_shape = predictor(img_rd, face)
face_desc = face_reco_model.compute_face_descriptor(img_rd, face_shape)
# 添加说明
cv2.putText(img_rd, "Face Descriptor", (20, 40), font, 1, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "FPS: " + str(self.fps.__round__(2)), (20, 100), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Faces: " + str(len(faces)), (20, 140), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "S: Save current face", (20, 400), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Q: Quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
# 按下 'q' 键退出
if k == ord('q'):
break
self.update_fps()
cv2.namedWindow("camera", 1)
cv2.imshow("camera", img_rd)
def main():
Face_Descriptor_con = Face_Descriptor()
Face_Descriptor_con.run()
if __name__ == '__main__':
main()

View File

@ -6,156 +6,217 @@
# GitHub: https://github.com/coneypo/Dlib_face_recognition_from_camera
# Created at 2018-05-11
# Updated at 2019-04-09
# Updated at 2020-05-29
import dlib # 人脸处理的库 Dlib
import numpy as np # 数据处理的库 numpy
import cv2 # 图像处理的库 OpenCv
import pandas as pd # 数据处理的库 Pandas
import dlib # 人脸处理的库 Dlib
import numpy as np # 数据处理的库 Numpy
import cv2 # 图像处理的库 OpenCV
import pandas as pd # 数据处理的库 Pandas
import os
import time
from PIL import Image, ImageDraw, ImageFont
# 人脸识别模型,提取128D的特征矢量
# face recognition model, the object maps human faces into 128D vectors
# Refer this tutorial: http://dlib.net/python/index.html#dlib.face_recognition_model_v1
facerec = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
# 计算两个128D向量间的欧式距离
# compute the e-distance between two 128D features
def return_euclidean_distance(feature_1, feature_2):
feature_1 = np.array(feature_1)
feature_2 = np.array(feature_2)
dist = np.sqrt(np.sum(np.square(feature_1 - feature_2)))
return dist
# 处理存放所有人脸特征的 csv
path_features_known_csv = "data/features_all.csv"
csv_rd = pd.read_csv(path_features_known_csv, header=None)
# 用来存放所有录入人脸特征的数组
# the array to save the features of faces in the database
features_known_arr = []
# 读取已知人脸数据
# print known faces
for i in range(csv_rd.shape[0]):
features_someone_arr = []
for j in range(0, len(csv_rd.ix[i, :])):
features_someone_arr.append(csv_rd.ix[i, :][j])
features_known_arr.append(features_someone_arr)
print("Faces in Database:", len(features_known_arr))
# Dlib 检测器和预测器
# The detector and predictor will be used
# 1. Dlib 正向人脸检测器
detector = dlib.get_frontal_face_detector()
# 2. Dlib 人脸 landmark 特征点检测器
predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat')
# 创建 cv2 摄像头对象
# cv2.VideoCapture(0) to use the default camera of PC,
# and you can use local video name by use cv2.VideoCapture(filename)
cap = cv2.VideoCapture(0)
# 3. Dlib Resnet 人脸识别模型,提取 128D 的特征矢量
face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
# cap.set(propId, value)
# 设置视频参数,propId 设置的视频参数,value 设置的参数值
cap.set(3, 480)
# cap.isOpened() 返回 true/false 检查初始化是否成功
# when the camera is open
while cap.isOpened():
class Face_Recognizer:
def __init__(self):
# 用来存放所有录入人脸特征的数组 / Save the features of faces in the database
self.features_known_list = []
flag, img_rd = cap.read()
kk = cv2.waitKey(1)
# 存储录入人脸名字 / Save the name of faces known
self.name_known_cnt = 0
self.name_known_list = []
# 取灰度
img_gray = cv2.cvtColor(img_rd, cv2.COLOR_RGB2GRAY)
# 存储当前摄像头中捕获到的所有人脸的坐标名字 / Save the positions and names of current faces captured
self.pos_camera_list = []
self.name_camera_list = []
# 存储当前摄像头中捕获到的人脸数
self.faces_cnt = 0
# 存储当前摄像头中捕获到的人脸特征
self.features_camera_list = []
# 人脸数 faces
faces = detector(img_gray, 0)
# Update FPS
self.fps = 0
self.frame_start_time = 0
# 待会要写的字体 font to write later
font = cv2.FONT_HERSHEY_COMPLEX
# 存储当前摄像头中捕获到的所有人脸的坐标/名字
# the list to save the positions and names of current faces captured
pos_namelist = []
name_namelist = []
# 按下 q 键退出
# press 'q' to exit
if kk == ord('q'):
break
else:
# 检测到人脸 when face detected
if len(faces) != 0:
# 获取当前捕获到的图像的所有人脸的特征,存储到 features_cap_arr
# get the features captured and save into features_cap_arr
features_cap_arr = []
for i in range(len(faces)):
shape = predictor(img_rd, faces[i])
features_cap_arr.append(facerec.compute_face_descriptor(img_rd, shape))
# 遍历捕获到的图像中所有的人脸
# traversal all the faces in the database
for k in range(len(faces)):
print("##### camera person", k+1, "#####")
# 让人名跟随在矩形框的下方
# 确定人名的位置坐标
# 先默认所有人不认识,是 unknown
# set the default names of faces with "unknown"
name_namelist.append("unknown")
# 每个捕获人脸的名字坐标 the positions of faces captured
pos_namelist.append(tuple([faces[k].left(), int(faces[k].bottom() + (faces[k].bottom() - faces[k].top())/4)]))
# 对于某张人脸,遍历所有存储的人脸特征
# for every faces detected, compare the faces in the database
e_distance_list = []
for i in range(len(features_known_arr)):
# 如果 person_X 数据不为空
if str(features_known_arr[i][0]) != '0.0':
print("with person", str(i + 1), "the e distance: ", end='')
e_distance_tmp = return_euclidean_distance(features_cap_arr[k], features_known_arr[i])
print(e_distance_tmp)
e_distance_list.append(e_distance_tmp)
# 从 "features_all.csv" 读取录入人脸特征
def get_face_database(self):
if os.path.exists("data/features_all.csv"):
path_features_known_csv = "data/features_all.csv"
csv_rd = pd.read_csv(path_features_known_csv, header=None)
# 2. 读取已知人脸数据 / Print known faces
for i in range(csv_rd.shape[0]):
features_someone_arr = []
for j in range(0, 128):
if csv_rd.iloc[i][j] == '':
features_someone_arr.append('0')
else:
# 空数据 person_X
e_distance_list.append(999999999)
# Find the one with minimum e distance
similar_person_num = e_distance_list.index(min(e_distance_list))
print("Minimum e distance with person", int(similar_person_num)+1)
features_someone_arr.append(csv_rd.iloc[i][j])
self.features_known_list.append(features_someone_arr)
self.name_known_list.append("Person_"+str(i+1))
self.name_known_cnt = len(self.name_known_list)
print("Faces in Database:", len(self.features_known_list))
return 1
else:
print('##### Warning #####', '\n')
print("'features_all.csv' not found!")
print(
"Please run 'get_faces_from_camera.py' and 'features_extraction_to_csv.py' before 'face_reco_from_camera.py'",
'\n')
print('##### End Warning #####')
return 0
if min(e_distance_list) < 0.4:
# 在这里修改 person_1, person_2 ... 的名字
# 可以在这里改称 Jack, Tom and others
# Here you can modify the names shown on the camera
name_namelist[k] = "Person "+str(int(similar_person_num)+1)
print("May be person "+str(int(similar_person_num)+1))
# 计算两个128D向量间的欧式距离 / Compute the e-distance between two 128D features
@staticmethod
def return_euclidean_distance(feature_1, feature_2):
feature_1 = np.array(feature_1)
feature_2 = np.array(feature_2)
dist = np.sqrt(np.sum(np.square(feature_1 - feature_2)))
return dist
# 更新 FPS / Update FPS of Video stream
def update_fps(self):
now = time.time()
self.frame_time = now - self.frame_start_time
self.fps = 1.0 / self.frame_time
self.frame_start_time = now
def draw_note(self, img_rd):
font = cv2.FONT_ITALIC
cv2.putText(img_rd, "Face Recognizer", (20, 40), font, 1, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "FPS: " + str(self.fps.__round__(2)), (20, 100), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Faces: " + str(self.faces_cnt), (20, 140), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Q: Quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
def draw_name(self, img_rd):
# 在人脸框下面写人脸名字 / Write names under rectangle
font = ImageFont.truetype("simsun.ttc", 30)
img = Image.fromarray(cv2.cvtColor(img_rd, cv2.COLOR_BGR2RGB))
draw = ImageDraw.Draw(img)
for i in range(self.faces_cnt):
# cv2.putText(img_rd, self.name_camera_list[i], self.pos_camera_list[i], font, 0.8, (0, 255, 255), 1, cv2.LINE_AA)
draw.text(xy=self.pos_camera_list[i], text=self.name_camera_list[i], font=font)
img_with_name = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
return img_with_name
# 修改显示人名
def modify_name_camera_list(self):
# Default known name: person_1, person_2, person_3
self.name_known_list[0] ='张三'.encode('utf-8').decode()
self.name_known_list[1] ='李四'.encode('utf-8').decode()
# self.name_known_list[2] ='xx'.encode('utf-8').decode()
# self.name_known_list[3] ='xx'.encode('utf-8').decode()
# self.name_known_list[4] ='xx'.encode('utf-8').decode()
# 处理获取的视频流,进行人脸识别 / Input video stream and face reco process
def process(self, stream):
# 1. 读取存放所有人脸特征的 csv
if self.get_face_database():
while stream.isOpened():
flag, img_rd = stream.read()
faces = detector(img_rd, 0)
kk = cv2.waitKey(1)
# 按下 q 键退出 / Press 'q' to quit
if kk == ord('q'):
break
else:
print("Unknown person")
self.draw_note(img_rd)
self.features_camera_list = []
self.faces_cnt = 0
self.pos_camera_list = []
self.name_camera_list = []
# 矩形框
# draw rectangle
for kk, d in enumerate(faces):
# 绘制矩形框
cv2.rectangle(img_rd, tuple([d.left(), d.top()]), tuple([d.right(), d.bottom()]), (0, 255, 255), 2)
print('\n')
# 2. 检测到人脸 / when face detected
if len(faces) != 0:
# 3. 获取当前捕获到的图像的所有人脸的特征,存储到 self.features_camera_list
# 3. Get the features captured and save into self.features_camera_list
for i in range(len(faces)):
shape = predictor(img_rd, faces[i])
self.features_camera_list.append(face_reco_model.compute_face_descriptor(img_rd, shape))
# 在人脸框下面写人脸名字
# write names under rectangle
for i in range(len(faces)):
cv2.putText(img_rd, name_namelist[i], pos_namelist[i], font, 0.8, (0, 255, 255), 1, cv2.LINE_AA)
# 4. 遍历捕获到的图像中所有的人脸 / Traversal all the faces in the database
for k in range(len(faces)):
print("##### camera person", k + 1, "#####")
# 让人名跟随在矩形框的下方
# 确定人名的位置坐标
# 先默认所有人不认识,是 unknown
# Set the default names of faces with "unknown"
self.name_camera_list.append("unknown")
print("Faces in camera now:", name_namelist, "\n")
# 每个捕获人脸的名字坐标 / Positions of faces captured
self.pos_camera_list.append(tuple(
[faces[k].left(), int(faces[k].bottom() + (faces[k].bottom() - faces[k].top()) / 4)]))
cv2.putText(img_rd, "Press 'q': Quit", (20, 450), font, 0.8, (84, 255, 159), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Face Recognition", (20, 40), font, 1, (0, 0, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Faces: " + str(len(faces)), (20, 100), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
# 5. 对于某张人脸,遍历所有存储的人脸特征
# For every faces detected, compare the faces in the database
e_distance_list = []
for i in range(len(self.features_known_list)):
# 如果 person_X 数据不为空
if str(self.features_known_list[i][0]) != '0.0':
print("with person", str(i + 1), "the e distance: ", end='')
e_distance_tmp = self.return_euclidean_distance(self.features_camera_list[k],
self.features_known_list[i])
print(e_distance_tmp)
e_distance_list.append(e_distance_tmp)
else:
# 空数据 person_X
e_distance_list.append(999999999)
# 6. 寻找出最小的欧式距离匹配 / Find the one with minimum e distance
similar_person_num = e_distance_list.index(min(e_distance_list))
print("Minimum e distance with person", self.name_known_list[similar_person_num])
# 窗口显示 show with opencv
cv2.imshow("camera", img_rd)
if min(e_distance_list) < 0.4:
self.name_camera_list[k] = self.name_known_list[similar_person_num]
print("May be person " + str(self.name_known_list[similar_person_num]))
else:
print("Unknown person")
# 释放摄像头 release camera
cap.release()
# 矩形框 / Draw rectangle
for kk, d in enumerate(faces):
# 绘制矩形框
cv2.rectangle(img_rd, tuple([d.left(), d.top()]), tuple([d.right(), d.bottom()]),
(0, 255, 255), 2)
print('\n')
# 删除建立的窗口 delete all the windows
cv2.destroyAllWindows()
self.faces_cnt = len(faces)
# 7. 在这里更改显示的人名 / Modify name if needed
self.modify_name_camera_list()
# 8. 写名字 / Draw name
# self.draw_name(img_rd)
img_with_name = self.draw_name(img_rd)
else:
img_with_name = img_rd
print("Faces in camera now:", self.name_camera_list, "\n")
cv2.imshow("camera", img_with_name)
# 9. 更新 FPS / Update stream FPS
self.update_fps()
# OpenCV 调用摄像头并进行 process
def run(self):
cap = cv2.VideoCapture(0)
cap.set(3, 480)
self.process(cap)
cap.release()
cv2.destroyAllWindows()
def main():
Face_Recognizer_con = Face_Recognizer()
Face_Recognizer_con.run()
if __name__ == '__main__':
main()

221
face_reco_from_camera_mysql.py Executable file
View File

@ -0,0 +1,221 @@
# 摄像头实时人脸识别
# Real-time face recognition
# Author: coneypo
# Blog: http://www.cnblogs.com/AdaminXie
# GitHub: https://github.com/coneypo/Dlib_face_recognition_from_camera
# Created at 2018-05-11
# Updated at 2020-05-29
import dlib # 人脸处理的库 Dlib
import numpy as np # 数据处理的库 Numpy
import cv2 # 图像处理的库 OpenCV
import pandas as pd # 数据处理的库 Pandas
import os
import time
from PIL import Image, ImageDraw, ImageFont
import pymysql
db = pymysql.connect("localhost", "root", "intel@123", "dlib_database")
cursor = db.cursor()
# 1. Dlib 正向人脸检测器
detector = dlib.get_frontal_face_detector()
# 2. Dlib 人脸 landmark 特征点检测器
predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat')
# 3. Dlib Resnet 人脸识别模型,提取 128D 的特征矢量
face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
class Face_Recognizer:
def __init__(self):
# 用来存放所有录入人脸特征的数组 / Save the features of faces in the database
self.features_known_list = []
# 存储录入人脸名字 / Save the name of faces known
self.name_known_cnt = 0
self.name_known_list = []
# 存储当前摄像头中捕获到的所有人脸的坐标名字 / Save the positions and names of current faces captured
self.pos_camera_list = []
self.name_camera_list = []
# 存储当前摄像头中捕获到的人脸数
self.faces_cnt = 0
# 存储当前摄像头中捕获到的人脸特征
self.features_camera_list = []
# Update FPS
self.fps = 0
self.frame_start_time = 0
# 从 "features_all.csv" 读取录入人脸特征
def get_face_database(self):
# 1. get database face numbers
cmd_rd = "select count(*) from dlib_face_table;"
cursor.execute(cmd_rd)
results = cursor.fetchall()
person_cnt = int(results[0][0])
# 2. get features for person X
for person in range(person_cnt):
# lookup for personX
cmd_lookup = "select * from dlib_face_table where person_x=\"person_" + str(person + 1) + "\";"
cursor.execute(cmd_lookup)
results = cursor.fetchall()
results = list(results[0][1:])
self.features_known_list.append(results)
self.name_known_list.append("Person_" + str(person + 1))
print(results)
self.name_known_cnt = len(self.name_known_list)
print("Faces in Database:", len(self.features_known_list))
return 1
# 计算两个128D向量间的欧式距离 / Compute the e-distance between two 128D features
@staticmethod
def return_euclidean_distance(feature_1, feature_2):
feature_1 = np.array(feature_1)
feature_2 = np.array(feature_2)
dist = np.sqrt(np.sum(np.square(feature_1 - feature_2)))
return dist
# 更新 FPS / Update FPS of Video stream
def update_fps(self):
now = time.time()
self.frame_time = now - self.frame_start_time
self.fps = 1.0 / self.frame_time
self.frame_start_time = now
def draw_note(self, img_rd):
font = cv2.FONT_ITALIC
cv2.putText(img_rd, "Face Recognizer", (20, 40), font, 1, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "FPS: " + str(self.fps.__round__(2)), (20, 100), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Faces: " + str(self.faces_cnt), (20, 140), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Q: Quit", (20, 450), font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
def draw_name(self, img_rd):
# 在人脸框下面写人脸名字 / Write names under rectangle
font = ImageFont.truetype("simsun.ttc", 30)
img = Image.fromarray(cv2.cvtColor(img_rd, cv2.COLOR_BGR2RGB))
draw = ImageDraw.Draw(img)
for i in range(self.faces_cnt):
# cv2.putText(img_rd, self.name_camera_list[i], self.pos_camera_list[i], font, 0.8, (0, 255, 255), 1, cv2.LINE_AA)
draw.text(xy=self.pos_camera_list[i], text=self.name_camera_list[i], font=font)
img_with_name = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
return img_with_name
# 修改显示人名
def modify_name_camera_list(self):
# Default known name: person_1, person_2, person_3
self.name_known_list[0] ='张三'.encode('utf-8').decode()
self.name_known_list[1] ='李四'.encode('utf-8').decode()
# self.name_known_list[2] ='xx'.encode('utf-8').decode()
# self.name_known_list[3] ='xx'.encode('utf-8').decode()
# self.name_known_list[4] ='xx'.encode('utf-8').decode()
# 处理获取的视频流,进行人脸识别 / Input video stream and face reco process
def process(self, stream):
# 1. 读取存放所有人脸特征的 MySQL 数据表 / Read the face-feature table from MySQL
if self.get_face_database():
while stream.isOpened():
flag, img_rd = stream.read()
faces = detector(img_rd, 0)
kk = cv2.waitKey(1)
# 按下 q 键退出 / Press 'q' to quit
if kk == ord('q'):
break
else:
self.draw_note(img_rd)
self.features_camera_list = []
self.faces_cnt = 0
self.pos_camera_list = []
self.name_camera_list = []
# 2. 检测到人脸 / when face detected
if len(faces) != 0:
# 3. 获取当前捕获到的图像的所有人脸的特征,存储到 self.features_camera_list
# 3. Get the features captured and save into self.features_camera_list
for i in range(len(faces)):
shape = predictor(img_rd, faces[i])
self.features_camera_list.append(face_reco_model.compute_face_descriptor(img_rd, shape))
# 4. 遍历捕获到的图像中所有的人脸 / Traversal all the faces in the database
for k in range(len(faces)):
print("##### camera person", k + 1, "#####")
# 让人名跟随在矩形框的下方
# 确定人名的位置坐标
# 先默认所有人不认识,是 unknown
# Set the default names of faces with "unknown"
self.name_camera_list.append("unknown")
# 每个捕获人脸的名字坐标 / Positions of faces captured
self.pos_camera_list.append(tuple(
[faces[k].left(), int(faces[k].bottom() + (faces[k].bottom() - faces[k].top()) / 4)]))
# 5. 对于某张人脸,遍历所有存储的人脸特征
# For every faces detected, compare the faces in the database
e_distance_list = []
for i in range(len(self.features_known_list)):
# 如果 person_X 数据不为空
if str(self.features_known_list[i][0]) != '0.0':
print("with person", str(i + 1), "the e distance: ", end='')
e_distance_tmp = self.return_euclidean_distance(self.features_camera_list[k],
self.features_known_list[i])
print(e_distance_tmp)
e_distance_list.append(e_distance_tmp)
else:
# 空数据 person_X
e_distance_list.append(999999999)
# 6. 寻找出最小的欧式距离匹配 / Find the one with minimum e distance
similar_person_num = e_distance_list.index(min(e_distance_list))
print("Minimum e distance with person", self.name_known_list[similar_person_num])
if min(e_distance_list) < 0.4:
self.name_camera_list[k] = self.name_known_list[similar_person_num]
print("May be person " + str(self.name_known_list[similar_person_num]))
else:
print("Unknown person")
# 矩形框 / Draw rectangle
for kk, d in enumerate(faces):
# 绘制矩形框
cv2.rectangle(img_rd, tuple([d.left(), d.top()]), tuple([d.right(), d.bottom()]),
(0, 255, 255), 2)
print('\n')
self.faces_cnt = len(faces)
# 7. 在这里更改显示的人名 / Modify name if needed
# self.modify_name_camera_list()
# 8. 写名字 / Draw name
# self.draw_name(img_rd)
img_with_name = self.draw_name(img_rd)
else:
img_with_name = img_rd
print("Faces in camera now:", self.name_camera_list, "\n")
cv2.imshow("camera", img_with_name)
# 9. 更新 FPS / Update stream FPS
self.update_fps()
# OpenCV 调用摄像头并进行 process
def run(self):
cap = cv2.VideoCapture(0)
cap.set(3, 480)
self.process(cap)
cap.release()
cv2.destroyAllWindows()
def main():
Face_Recognizer_con = Face_Recognizer()
Face_Recognizer_con.run()
if __name__ == '__main__':
main()

View File

@ -7,9 +7,8 @@
# Mail: coneypo@foxmail.com
# Created at 2018-05-11
# Updated at 2019-04-04
# Updated at 2020-04-02
import cv2
import os
import dlib
from skimage import io
@ -19,30 +18,28 @@ import numpy as np
# 要读取人脸图像文件的路径
path_images_from_camera = "data/data_faces_from_camera/"
# Dlib 正向人脸检测器
# 1. Dlib 正向人脸检测器
detector = dlib.get_frontal_face_detector()
# Dlib 人脸测器
predictor = dlib.shape_predictor("data/data_dlib/shape_predictor_5_face_landmarks.dat")
# 2. Dlib 人脸 landmark 特征点检测器
predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat')
# Dlib 人脸识别模型
# Face recognition model, the object maps human faces into 128D vectors
face_rec = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
# 3. Dlib Resnet 人脸识别模型,提取 128D 的特征矢量
face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
# 返回单张图像的 128D 特征
def return_128d_features(path_img):
img_rd = io.imread(path_img)
img_gray = cv2.cvtColor(img_rd, cv2.COLOR_BGR2RGB)
faces = detector(img_gray, 1)
faces = detector(img_rd, 1)
print("%-40s %-20s" % ("检测到人脸的图像 / image with faces detected:", path_img), '\n')
print("%-40s %-20s" % ("检测到人脸的图像 / Image with faces detected:", path_img), '\n')
# 因为有可能截下来的人脸再去检测,检测不出来人脸了
# 所以要确保是 检测到人脸的人脸图像 拿去算特征
if len(faces) != 0:
shape = predictor(img_gray, faces[0])
face_descriptor = face_rec.compute_face_descriptor(img_gray, shape)
shape = predictor(img_rd, faces[0])
face_descriptor = face_reco_model.compute_face_descriptor(img_rd, shape)
else:
face_descriptor = 0
print("no face")
@ -57,7 +54,7 @@ def return_features_mean_personX(path_faces_personX):
if photos_list:
for i in range(len(photos_list)):
# 调用return_128d_features()得到128d特征
print("%-40s %-20s" % ("正在读的人脸图像 / image to read:", path_faces_personX + "/" + photos_list[i]))
print("%-40s %-20s" % ("正在读的人脸图像 / Image to read:", path_faces_personX + "/" + photos_list[i]))
features_128d = return_128d_features(path_faces_personX + "/" + photos_list[i])
# print(features_128d)
# 遇到没有检测出人脸的图片跳过
@ -73,7 +70,7 @@ def return_features_mean_personX(path_faces_personX):
if features_list_personX:
features_mean_personX = np.array(features_list_personX).mean(axis=0)
else:
features_mean_personX = '0'
features_mean_personX = np.zeros(128, dtype=int, order='C')
return features_mean_personX
@ -85,13 +82,13 @@ for person in person_list:
person_num_list.append(int(person.split('_')[-1]))
person_cnt = max(person_num_list)
with open("data/features_all.csv", "w", newline="") as csvfile:
writer = csv.writer(csvfile)
for person in range(person_cnt):
# Get the mean/average features of face/personX, it will be a list with a length of 128D
print(path_images_from_camera + "person_"+str(person+1))
features_mean_personX = return_features_mean_personX(path_images_from_camera + "person_"+str(person+1))
writer.writerow(features_mean_personX)
print("特征均值 / The mean of features:", list(features_mean_personX))
print('\n')
print("所有录入人脸数据存入 / Save all the features of faces registered into: data/features_all.csv")
for person in range(person_cnt):
# Get the mean/average features of face/personX, it will be a list with a length of 128D
print(path_images_from_camera + "person_" + str(person + 1))
features_mean_personX = return_features_mean_personX(path_images_from_camera + "person_" + str(person + 1))
print(features_mean_personX.shape)
print(features_mean_personX[0])
print("特征均值 / The mean of features:", list(features_mean_personX))
print('\n')

117
features_extraction_to_mysql.py Executable file
View File

@ -0,0 +1,117 @@
# 从人脸图像文件中提取人脸特征存入 MySQL
# Features extraction from images and save into the MySQL table "dlib_face_table"
# Author: coneypo
# Blog: http://www.cnblogs.com/AdaminXie
# GitHub: https://github.com/coneypo/Dlib_face_recognition_from_camera
# Mail: coneypo@foxmail.com
# Created at 2018-05-11
# Updated at 2020-04-02
import os
import dlib
from skimage import io
import numpy as np
import pymysql
db = pymysql.connect("localhost", "root", "intel@123", "dlib_database")
cursor = db.cursor()
# 要读取人脸图像文件的路径
path_images_from_camera = "data/data_faces_from_camera/"
# 1. Dlib 正向人脸检测器
detector = dlib.get_frontal_face_detector()
# 2. Dlib 人脸 landmark 特征点检测器
predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat')
# 3. Dlib Resnet 人脸识别模型,提取 128D 的特征矢量
face_reco_model = dlib.face_recognition_model_v1("data/data_dlib/dlib_face_recognition_resnet_model_v1.dat")
# 返回单张图像的 128D 特征
def return_128d_features(path_img):
img_rd = io.imread(path_img)
faces = detector(img_rd, 1)
print("%-40s %-20s" % ("检测到人脸的图像 / Image with faces detected:", path_img), '\n')
# 因为有可能截下来的人脸再去检测,检测不出来人脸了
# 所以要确保是 检测到人脸的人脸图像 拿去算特征
if len(faces) != 0:
shape = predictor(img_rd, faces[0])
face_descriptor = face_reco_model.compute_face_descriptor(img_rd, shape)
else:
face_descriptor = 0
print("no face")
return face_descriptor
# 将文件夹中照片特征提取出来, 写入 CSV
def return_features_mean_personX(path_faces_personX):
features_list_personX = []
photos_list = os.listdir(path_faces_personX)
if photos_list:
for i in range(len(photos_list)):
# 调用return_128d_features()得到128d特征
print("%-40s %-20s" % ("正在读的人脸图像 / Image to read:", path_faces_personX + "/" + photos_list[i]))
features_128d = return_128d_features(path_faces_personX + "/" + photos_list[i])
# print(features_128d)
# 遇到没有检测出人脸的图片跳过
if features_128d == 0:
i += 1
else:
features_list_personX.append(features_128d)
else:
print("文件夹内图像文件为空 / Warning: No images in " + path_faces_personX + '/', '\n')
# 计算 128D 特征的均值
# personX 的 N 张图像 x 128D -> 1 x 128D
if features_list_personX:
features_mean_personX = np.array(features_list_personX).mean(axis=0)
else:
features_mean_personX = np.zeros(128, dtype=int, order='C')
return features_mean_personX
# 获取已录入的最后一个人脸序号 / get the num of latest person
person_list = os.listdir("data/data_faces_from_camera/")
person_num_list = []
for person in person_list:
person_num_list.append(int(person.split('_')[-1]))
person_cnt = max(person_num_list)
# 0. clear table in mysql
# cursor.execute("truncate dlib_face_table;")
# 1. check existing people in mysql
cursor.execute("select count(*) from dlib_face_table;")
person_start = int(cursor.fetchall()[0][0])
for person in range(person_start, person_cnt):
# Get the mean/average features of face/personX, it will be a list with a length of 128D
print(path_images_from_camera + "person_" + str(person + 1))
features_mean_personX = return_features_mean_personX(path_images_from_camera + "person_" + str(person + 1))
print("特征均值 / The mean of features:", list(features_mean_personX))
print('\n')
# 2. Insert person 1 to person X
cursor.execute("insert into dlib_face_table(person_x) values(\"person_"+str(person+1)+"\");")
# 3. Insert features for person X
for i in range(128):
# update issue_info set github_status='Open', github_type='bug' where github_id='2222';
print("update dlib_face_table set feature_" + str(i + 1) + '=\"' + str(
features_mean_personX[i]) + "\" where person_x=\"person_" + str(person + 1) + "\";")
cursor.execute("update dlib_face_table set feature_" + str(i + 1) + '=\"' + str(
features_mean_personX[i]) + "\" where person_x=\"person_" + str(person + 1) + "\";")
db.commit()
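Both MySQL scripts in this change assume that the database "dlib_database" and the table "dlib_face_table" (a "person_x" label column plus "feature_1" ... "feature_128") already exist; that schema is not shown in the repo, so the one-off setup below is an assumption inferred from the queries used above:
# Hypothetical one-off setup for the MySQL scripts above; schema inferred, not taken from the repo
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="intel@123")
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS dlib_database;")
cursor.execute("USE dlib_database;")
# One row per registered person: a label column plus 128 double-precision feature columns
feature_columns = ", ".join("feature_%d DOUBLE" % (i + 1) for i in range(128))
cursor.execute("CREATE TABLE IF NOT EXISTS dlib_face_table (person_x VARCHAR(64), %s);" % feature_columns)
conn.commit()
conn.close()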

View File

@ -7,190 +7,183 @@
# Mail: coneypo@foxmail.com
# Created at 2018-05-11
# Updated at 2019-04-12
# Updated at 2020-04-19
import dlib # 人脸处理的库 Dlib
import numpy as np # 数据处理的库 Numpy
import cv2 # 图像处理的库 OpenCv
import cv2 # 图像处理的库 OpenCV
import os # 读写文件
import shutil # 读写文件
import time
# Dlib 正向人脸检测器 / frontal face detector
# Dlib 正向人脸检测器
detector = dlib.get_frontal_face_detector()
# Dlib 68 点特征预测器 / 68 points features predictor
predictor = dlib.shape_predictor('data/data_dlib/shape_predictor_68_face_landmarks.dat')
# OpenCv 调用摄像头 use camera
cap = cv2.VideoCapture(0)
class Face_Register:
def __init__(self):
self.path_photos_from_camera = "data/data_faces_from_camera/"
self.font = cv2.FONT_ITALIC
# 设置视频参数 set camera
cap.set(3, 480)
self.existing_faces_cnt = 0 # 已录入的人脸计数器
self.ss_cnt = 0 # 录入 personX 人脸时图片计数器
self.faces_cnt = 0 # 录入人脸计数器
# 人脸截图的计数器 the counter for screen shoot
cnt_ss = 0
# 之后用来控制是否保存图像的 flag / The flag to control if save
self.save_flag = 1
# 之后用来检查是否先按 'n' 再按 's' / The flag to check if press 'n' before 's'
self.press_n_flag = 0
# 存储人脸的文件夹 the folder to save faces
current_face_dir = ""
self.frame_time = 0
self.frame_start_time = 0
self.fps = 0
# 保存 faces images 的路径 the directory to save images of faces
path_photos_from_camera = "data/data_faces_from_camera/"
# 新建保存人脸图像文件和数据CSV文件夹 / Mkdir for saving photos and csv
def pre_work_mkdir(self):
        # 新建文件夹 / Make folders to save face images and csv
if os.path.isdir(self.path_photos_from_camera):
pass
else:
os.mkdir(self.path_photos_from_camera)
# 删除之前存的人脸数据文件夹 / Delete the old data of faces
def pre_work_del_old_face_folders(self):
        # 删除之前存的人脸数据文件夹, 删除 "/data_faces_from_camera/person_x/"... / Delete all old face folders "/data_faces_from_camera/person_x/"...
folders_rd = os.listdir(self.path_photos_from_camera)
for i in range(len(folders_rd)):
shutil.rmtree(self.path_photos_from_camera+folders_rd[i])
if os.path.isfile("data/features_all.csv"):
os.remove("data/features_all.csv")
# 新建保存人脸图像文件和数据CSV文件夹
# mkdir for saving photos and csv
def pre_work_mkdir():
    # 如果有之前录入的人脸, 在之前 person_x 的序号按照 person_x+1 开始录入 /
    # If the old folders exist, continue the numbering from person_x+1
def check_existing_faces_cnt(self):
if os.listdir("data/data_faces_from_camera/"):
# 获取已录入的最后一个人脸序号 / Get the num of latest person
person_list = os.listdir("data/data_faces_from_camera/")
person_num_list = []
for person in person_list:
person_num_list.append(int(person.split('_')[-1]))
self.existing_faces_cnt = max(person_num_list)
    # 新建文件夹 / make folders to save face images and csv
if os.path.isdir(path_photos_from_camera):
pass
else:
os.mkdir(path_photos_from_camera)
# 如果第一次存储或者没有之前录入的人脸, 按照 person_1 开始录入
# Start from person_1
else:
self.existing_faces_cnt = 0
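    # A compact equivalent of the counting logic above (hypothetical sketch, not part of the
    # original class, assuming every folder name ends with "_<number>" like person_3):
    def check_existing_faces_cnt_compact(self):
        person_list = os.listdir(self.path_photos_from_camera)
        nums = [int(p.split('_')[-1]) for p in person_list]
        self.existing_faces_cnt = max(nums) if nums else 0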
# 获取处理之后 stream 的帧数 / Get the fps of video stream
def update_fps(self):
now = time.time()
self.frame_time = now - self.frame_start_time
self.fps = 1.0 / self.frame_time
self.frame_start_time = now
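    # The instantaneous FPS above can jitter from frame to frame; a smoothed variant
    # (hypothetical, not part of the original class) could use an exponential moving average:
    def update_fps_smoothed(self, alpha=0.9):
        now = time.time()
        frame_time = now - self.frame_start_time
        instant_fps = 1.0 / frame_time if frame_time > 0 else 0.0
        self.fps = alpha * self.fps + (1 - alpha) * instant_fps  # smooth over recent frames
        self.frame_start_time = now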
pre_work_mkdir()
# 生成的 cv2 window 上面添加说明文字 / putText on cv2 window
def draw_note(self, img_rd):
# 添加说明 / Add some statements
cv2.putText(img_rd, "Face Register", (20, 40), self.font, 1, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "FPS: " + str(self.fps.__round__(2)), (20, 100), self.font, 0.8, (0, 255, 0), 1,
cv2.LINE_AA)
cv2.putText(img_rd, "Faces: " + str(self.faces_cnt), (20, 140), self.font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "N: Create face folder", (20, 350), self.font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "S: Save current face", (20, 400), self.font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Q: Quit", (20, 450), self.font, 0.8, (255, 255, 255), 1, cv2.LINE_AA)
    # 获取人脸 / Capture faces from the video stream
def process(self, stream):
# 1. 新建储存人脸图像文件目录 / Uncomment if you need mkdir
# self.pre_work_mkdir()
        ##### optional / 可选, 默认关闭 / disabled by default #####
# 删除之前存的人脸数据文件夹
# delete the old data of faces
def pre_work_del_old_face_folders():
    # 删除之前存的人脸数据文件夹 / Delete the old face folders
    # 删除 "/data_faces_from_camera/person_x/"... / i.e. remove "/data_faces_from_camera/person_x/"...
folders_rd = os.listdir(path_photos_from_camera)
for i in range(len(folders_rd)):
shutil.rmtree(path_photos_from_camera+folders_rd[i])
# 2. 删除 "/data/data_faces_from_camera" 中已有人脸图像文件 / Uncomment if want to delete the old faces
self.pre_work_del_old_face_folders()
if os.path.isfile("data/features_all.csv"):
os.remove("data/features_all.csv")
        # 3. 检查 "/data/data_faces_from_camera" 中已有人脸文件 / Check the existing face folders under "/data/data_faces_from_camera"
self.check_existing_faces_cnt()
        # 这里在每次程序录入之前, 删掉之前存的人脸数据 / Delete previously saved face data before each registration run
        # 如果这里打开, 每次进行人脸录入的时候都会删掉之前的人脸图像文件夹 person_1/, person_2/, person_3/...
        # If enabled, all old face folders person_1/, person_2/, person_3/... are deleted every time registration starts
# pre_work_del_old_face_folders()
##################################
while stream.isOpened():
flag, img_rd = stream.read() # Get camera video stream
kk = cv2.waitKey(1)
faces = detector(img_rd, 0) # Use dlib face detector
# 4. 按下 'n' 新建存储人脸的文件夹 / Press 'n' to create the folders for saving faces
if kk == ord('n'):
self.existing_faces_cnt += 1
current_face_dir = self.path_photos_from_camera + "person_" + str(self.existing_faces_cnt)
os.makedirs(current_face_dir)
print('\n')
print("新建的人脸文件夹 / Create folders: ", current_face_dir)
        # 如果有之前录入的人脸 / If the old folders exist
        # 在之前 person_x 的序号按照 person_x+1 开始录入 / Continue the numbering from person_x+1
if os.listdir("data/data_faces_from_camera/"):
# 获取已录入的最后一个人脸序号 / get the num of latest person
person_list = os.listdir("data/data_faces_from_camera/")
person_num_list = []
for person in person_list:
person_num_list.append(int(person.split('_')[-1]))
person_cnt = max(person_num_list)
self.ss_cnt = 0 # 将人脸计数器清零 / clear the cnt of faces
self.press_n_flag = 1 # 已经按下 'n' / have pressed 'n'
# 如果第一次存储或者没有之前录入的人脸, 按照 person_1 开始录入
# start from person_1
else:
person_cnt = 0
# 5. 检测到人脸 / Face detected
if len(faces) != 0:
                # 矩形框 / Draw rectangle boxes around the detected faces
for k, d in enumerate(faces):
# 计算矩形框大小 / Compute the size of rectangle box
height = (d.bottom() - d.top())
width = (d.right() - d.left())
hh = int(height/2)
ww = int(width/2)
# 之后用来控制是否保存图像的 flag / Flag to control whether the current face image is saved
save_flag = 1
# 之后用来检查是否先按 'n' 再按 's' / Flag to check whether 'n' was pressed before 's'
press_n_flag = 0
while cap.isOpened():
flag, img_rd = cap.read()
# print(img_rd.shape)
# It should be 480 height * 640 width
kk = cv2.waitKey(1)
img_gray = cv2.cvtColor(img_rd, cv2.COLOR_RGB2GRAY)
# 人脸数 faces
faces = detector(img_gray, 0)
# 待会要写的字体 / font to write
font = cv2.FONT_HERSHEY_COMPLEX
# 按下 'n' 新建存储人脸的文件夹 / press 'n' to create the folders for saving faces
if kk == ord('n'):
person_cnt += 1
current_face_dir = path_photos_from_camera + "person_" + str(person_cnt)
os.makedirs(current_face_dir)
print('\n')
print("新建的人脸文件夹 / Create folders: ", current_face_dir)
cnt_ss = 0 # 将人脸计数器清零 / clear the cnt of faces
press_n_flag = 1 # 已经按下 'n' / have pressed 'n'
# 检测到人脸 / if face detected
if len(faces) != 0:
# 矩形框 / show the rectangle box
for k, d in enumerate(faces):
# 计算矩形大小
# we need to compute the width and height of the box
# (x,y), (宽度width, 高度height)
pos_start = tuple([d.left(), d.top()])
pos_end = tuple([d.right(), d.bottom()])
# 计算矩形框大小 / compute the size of rectangle box
height = (d.bottom() - d.top())
width = (d.right() - d.left())
hh = int(height/2)
ww = int(width/2)
        # 设置颜色 / Color of the rectangle drawn around detected faces
color_rectangle = (255, 255, 255)
        # 判断人脸矩形框是否超出 480x640 / Check whether the face rectangle goes beyond the 480x640 frame
if (d.right()+ww) > 640 or (d.bottom()+hh > 480) or (d.left()-ww < 0) or (d.top()-hh < 0):
cv2.putText(img_rd, "OUT OF RANGE", (20, 300), font, 0.8, (0, 0, 255), 1, cv2.LINE_AA)
color_rectangle = (0, 0, 255)
save_flag = 0
if kk == ord('s'):
print("请调整位置 / Please adjust your position")
else:
color_rectangle = (255, 255, 255)
save_flag = 1
cv2.rectangle(img_rd,
tuple([d.left() - ww, d.top() - hh]),
tuple([d.right() + ww, d.bottom() + hh]),
color_rectangle, 2)
# 根据人脸大小生成空的图像 / create blank image according to the size of face detected
im_blank = np.zeros((int(height*2), width*2, 3), np.uint8)
if save_flag:
# 按下 's' 保存摄像头中的人脸到本地 / press 's' to save faces into local images
if kk == ord('s'):
# 检查有没有先按'n'新建文件夹 / check if you have pressed 'n'
if press_n_flag:
cnt_ss += 1
for ii in range(height*2):
for jj in range(width*2):
im_blank[ii][jj] = img_rd[d.top()-hh + ii][d.left()-ww + jj]
cv2.imwrite(current_face_dir + "/img_face_" + str(cnt_ss) + ".jpg", im_blank)
print("写入本地 / Save into:", str(current_face_dir) + "/img_face_" + str(cnt_ss) + ".jpg")
                # 6. 判断人脸矩形框是否超出 480x640 / Check whether the face rectangle goes beyond the 480x640 frame
if (d.right()+ww) > 640 or (d.bottom()+hh > 480) or (d.left()-ww < 0) or (d.top()-hh < 0):
cv2.putText(img_rd, "OUT OF RANGE", (20, 300), self.font, 0.8, (0, 0, 255), 1, cv2.LINE_AA)
color_rectangle = (0, 0, 255)
save_flag = 0
if kk == ord('s'):
print("请调整位置 / Please adjust your position")
else:
print("请在按 'S' 之前先按 'N' 来建文件夹 / Please press 'N' before 'S'")
color_rectangle = (255, 255, 255)
save_flag = 1
        # 显示人脸数 / Show the number of faces detected
cv2.putText(img_rd, "Faces: " + str(len(faces)), (20, 100), font, 0.8, (0, 255, 0), 1, cv2.LINE_AA)
cv2.rectangle(img_rd,
tuple([d.left() - ww, d.top() - hh]),
tuple([d.right() + ww, d.bottom() + hh]),
color_rectangle, 2)
# 添加说明 / add some statements
cv2.putText(img_rd, "Face Register", (20, 40), font, 1, (0, 0, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "N: New face folder", (20, 350), font, 0.8, (0, 0, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "S: Save current face", (20, 400), font, 0.8, (0, 0, 0), 1, cv2.LINE_AA)
cv2.putText(img_rd, "Q: Quit", (20, 450), font, 0.8, (0, 0, 0), 1, cv2.LINE_AA)
# 7. 根据人脸大小生成空的图像 / Create blank image according to the shape of face detected
img_blank = np.zeros((int(height*2), width*2, 3), np.uint8)
# 按下 'q' 键退出 / press 'q' to exit
if kk == ord('q'):
break
if save_flag:
# 8. 按下 's' 保存摄像头中的人脸到本地 / Press 's' to save faces into local images
if kk == ord('s'):
# 检查有没有先按'n'新建文件夹 / Check if you have pressed 'n'
if self.press_n_flag:
self.ss_cnt += 1
for ii in range(height*2):
for jj in range(width*2):
img_blank[ii][jj] = img_rd[d.top()-hh + ii][d.left()-ww + jj]
cv2.imwrite(current_face_dir + "/img_face_" + str(self.ss_cnt) + ".jpg", img_blank)
print("写入本地 / Save into:", str(current_face_dir) + "/img_face_" + str(self.ss_cnt) + ".jpg")
else:
print("请先按 'N' 来建文件夹, 按 'S' / Please press 'N' and press 'S'")
self.faces_cnt = len(faces)
            # 如果需要摄像头窗口大小可调 / Uncomment this line if you want the camera window to be resizable
# cv2.namedWindow("camera", 0)
# 9. 生成的窗口添加说明文字 / Add note on cv2 window
self.draw_note(img_rd)
cv2.imshow("camera", img_rd)
# 10. 按下 'q' 键退出 / Press 'q' to exit
if kk == ord('q'):
break
# 释放摄像头 / release camera
cap.release()
self.update_fps()
cv2.namedWindow("camera", 1)
cv2.imshow("camera", img_rd)
cv2.destroyAllWindows()
def run(self):
cap = cv2.VideoCapture(0)
cap.set(3, 640)
self.process(cap)
cap.release()
cv2.destroyAllWindows()
def main():
Face_Register_con = Face_Register()
Face_Register_con.run()
if __name__ == '__main__':
main()


@ -1,4 +1,4 @@
# OpenCv 调用摄像头 / Use OpenCV to access the camera
# OpenCV 调用摄像头 / Use OpenCV to access the camera
# 默认调用笔记本摄像头 / The laptop's built-in camera is used by default
# Author: coneypo
@ -10,9 +10,35 @@ import cv2
cap = cv2.VideoCapture(0)
# cap.set(3, 480)
# cap.set(propId, value)
# 设置视频参数: propId - 设置的视频参数, value - 设置的参数值 / Set video parameters: propId - the property to set, value - the value to assign
cap.set(3, 480)
"""
0. cv2.CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
1. cv2.CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
2. cv2.CAP_PROP_POS_AVI_RATIO Relative position of the video file
3. cv2.CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
4. cv2.CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
5. cv2.CAP_PROP_FPS Frame rate.
6. cv2.CAP_PROP_FOURCC 4-character code of codec.
7. cv2.CAP_PROP_FRAME_COUNT Number of frames in the video file.
8. cv2.CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
9. cv2.CAP_PROP_MODE Backend-specific value indicating the current capture mode.
10. cv2.CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
11. cv2.CAP_PROP_CONTRAST Contrast of the image (only for cameras).
12. cv2.CAP_PROP_SATURATION Saturation of the image (only for cameras).
13. cv2.CAP_PROP_HUE Hue of the image (only for cameras).
14. cv2.CAP_PROP_GAIN Gain of the image (only for cameras).
15. cv2.CAP_PROP_EXPOSURE Exposure (only for cameras).
16. cv2.CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
17. cv2.CAP_PROP_WHITE_BALANCE Currently unsupported
18. cv2.CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
"""
# The default capture size is 640x480 on Windows or Ubuntu
# So we will not rely on "cap.set" here; it does not take effect
# print(cv2.CAP_PROP_FRAME_WIDTH)
# print(cv2.CAP_PROP_FRAME_HEIGHT)
cap.set(3, 640)
# cap.isOpened() 返回 true/false, 检查摄像头初始化是否成功 / cap.isOpened() returns True/False, indicating whether the camera was initialized successfully
print(cap.isOpened())
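
Note that printing cv2.CAP_PROP_FRAME_WIDTH only prints the integer value of the constant (3), not the actual frame width; to query the real capture size, use cap.get(). A small sketch:

# Query the actual capture size (cap.get returns floats)
actual_w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
print("capture size: %dx%d" % (actual_w, actual_h))
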
@ -38,6 +64,11 @@ print(cap.isOpened())
while cap.isOpened():
ret_flag, img_camera = cap.read()
print("height: ", img_camera.shape[0])
print("width: ", img_camera.shape[1])
print('\n')
cv2.imshow("camera", img_camera)
    # 每帧数据延时 1ms, 延时为0, 读取的是静态帧 / Wait 1 ms per frame; with a delay of 0 only a static frame is read

Binary image files changed (the diff viewer shows only size information):

introduction/face_reco_single_person.png (Executable file → Normal file): 428 KiB → 1.1 MiB
introduction/face_reco_two_people_in_database.png (Executable file → Normal file): 425 KiB → 1.5 MiB
introduction/get_face_from_camera.png (Executable file → Normal file): 416 KiB → 1.3 MiB
introduction/get_face_from_camera_out_of_range.png (Executable file → Normal file): 433 KiB → 1.4 MiB

Several further binary files (names not shown in this view) were added or removed, with sizes between 457 KiB and 1.5 MiB.