update,
31  .gitattributes (vendored)  Normal file
@@ -0,0 +1,31 @@
*.mp4 filter=lfs diff=lfs merge=lfs
*.zip filter=lfs diff=lfs merge=lfs
*.7z filter=lfs diff=lfs merge=lfs
*.tar.gz filter=lfs diff=lfs merge=lfs
*.jpg filter=lfs diff=lfs merge=lfs
*.png filter=lfs diff=lfs merge=lfs
*.avif filter=lfs diff=lfs merge=lfs
*.webm filter=lfs diff=lfs merge=lfs
*.mkv filter=lfs diff=lfs merge=lfs

# Documents
*.doc diff=astextplain
*.DOC diff=astextplain
*.docx diff=astextplain
*.DOCX diff=astextplain
*.dot diff=astextplain
*.DOT diff=astextplain
*.pdf diff=astextplain
*.PDF diff=astextplain
*.rtf diff=astextplain
*.RTF diff=astextplain

*.gif filter=lfs diff=lfs merge=lfs
*.GIF filter=lfs diff=lfs merge=lfs
*.bmp filter=lfs diff=lfs merge=lfs
*.BMP filter=lfs diff=lfs merge=lfs
*.tiff filter=lfs diff=lfs merge=lfs
*.TIFF filter=lfs diff=lfs merge=lfs
*.wav filter=lfs diff=lfs merge=lfs
*.WAV filter=lfs diff=lfs merge=lfs
*.log filter=lfs diff=lfs merge=lfs
1  .gitignore (vendored)  Normal file
@@ -0,0 +1 @@
**/~*.*
1  _lab/test.sh  Normal file
@@ -0,0 +1 @@
ubuntu:22.04
16  gitUpdate.sh  Executable file
@@ -0,0 +1,16 @@
#!/usr/bin/env bash

set -ex

git config --global http.version HTTP/1.1
git config --global lfs.allowincompletepush true
git config --global lfs.locksverify true
git config --global http.postBuffer 5368709120

git add .

git commit -m 'update,'

git push

echo "done"
45  quote1/NOTES.md  Normal file
@@ -0,0 +1,45 @@
Hi, the pig from Po Tat Estate~? 😂😂

Hello, I'm looking for someone to write code for me and would like to ask for a quote 🥹

The project is about the design and development of an AI LIDAR sensor for fall detection.

Part of the project is already done, and I have some rough ideas:
the plan is to use LSTM + OpenPose to do skeleton-based human fall detection,
then add a small soft alarm so the system looks like it has more features,
so in the end a GUI is needed to integrate all the functions.

The LiDAR sensor is off-the-shelf. For the training datasets, the plan is to train on ordinary camera videos first, then add LiDAR-sensor videos and train again.

The project aims for real-time detection as far as possible.

If it's convenient for you, I'd also like to share the problems I'm currently running into 🥹😂😂

```
Hello, I'd like to ask:
1. UG / IVE / Dip?
2. Is it a solo project or a group project?
3. Can you handle the LiDAR hardware yourself?
4. Which part do you want me to code?
5. When is the deadline?
6. Are you using TensorFlow or PyTorch?
7. Do you have a GPU of your own?

---

1. UG
2. Group project
3. As long as two screens can be streamed out, it can be handled (e.g. stream the top-right corner & bottom-left corner of the screen?)
4. Mainly I'd like your help with the LSTM + OpenPose part, especially real-time output of JSON data from the OpenPose library, then converting it back to CSV for model testing.
   If possible, I'd like your help with the GUI too 😖, especially streaming two images (depth image & LiDAR image) out and then processing them (for now it looks like three paths; not sure whether it will crash) (or I can send you a demo clip to try later).
5. The deadline is the end of March.
6. Currently using Google Colab, but there are too many restrictions; planning to switch platform.
7. Google Colab has one?
```

https://drive.google.com/drive/folders/1OZsp-3nGTGX9oHfPpXmNgNCWI1fRevAS

---

https://github.com/ekramalam/GMDCSA24-A-Dataset-for-Human-Fall-Detection-in-Videos/tree/master
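The "stream two regions of the screen" idea in answer 3 above can be sketched as plain array slicing: crop two fixed rectangles (the depth view and the LiDAR view) out of one captured screen frame. This is only a sketch; the region coordinates and the `two_screens` helper are illustrative assumptions, not project code.

```python
# Hedged sketch: crop two independent views out of one captured screen frame.
# The boxes (top, left, height, width) are made-up placeholders; real values
# depend on where the sensor software draws each view on screen.
from typing import List, Sequence, Tuple

def crop_region(frame: Sequence[Sequence[int]], top: int, left: int,
                height: int, width: int) -> List[List[int]]:
    """Return a height x width sub-image of `frame` starting at (top, left)."""
    return [list(row[left:left + width]) for row in frame[top:top + height]]

def two_screens(frame: Sequence[Sequence[int]],
                depth_box: Tuple[int, int, int, int],
                lidar_box: Tuple[int, int, int, int]):
    """Crop the two views (depth image, lidar image) from one frame."""
    return crop_region(frame, *depth_box), crop_region(frame, *lidar_box)
```

In practice the frame would come from a screen-capture library and the two crops would feed the separate processing paths.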
BIN  quote1/_ref/108CYUT0652018-003.pdf  Normal file
Binary file not shown.
1  quote1/_ref/Fall_Detection_Deep_Learning_Model  Submodule
Submodule quote1/_ref/Fall_Detection_Deep_Learning_Model added at 7040cdd021
1  quote1/_ref/neural-compressor  Submodule
Submodule quote1/_ref/neural-compressor added at a617115b14
0  quote1/_ref/openpose/.gitkeep  Normal file
1  quote1/_ref/openpose/openpose-docker  Submodule
Submodule quote1/_ref/openpose/openpose-docker added at 5d6a9497ba
35  quote1/digest/glances.md  Normal file
@@ -0,0 +1,35 @@
Problems faced

## Questions

OpenPose plus a dataset:
OpenPose + LSTM/CNN + transfer training should be able to meet the requirements,
and also handle sitting-down detection.

Along the way we may need classmates to help with the classifying.

I'd like to ask what resources I can use:

- a host machine?
- a GPU?
- (better to dedicate a PC for the presentation)

I'd like to ask how the LiDAR sensor connects:

- USB?
- WiFi / LAN?

---


depth image


lidar image

depth image --> fed into OpenPose --> results come out frame by frame (this is the problem)

---

Stuffing a YOLO in: lower priority.
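Since OpenPose results come out frame by frame, the LSTM needs them regrouped into fixed-length temporal sequences before training or inference. A minimal sketch, assuming a window length of 30 frames (an illustrative value, not a project decision):

```python
# Hedged sketch: group per-frame feature rows into overlapping windows so a
# sequence model (LSTM) can consume them. Window length and stride are
# assumptions chosen for illustration only.
from typing import List, Sequence

def sliding_windows(rows: Sequence[Sequence[float]],
                    length: int = 30, stride: int = 1) -> List[List[Sequence[float]]]:
    """Return overlapping windows of `length` consecutive frame rows."""
    return [list(rows[i:i + length]) for i in range(0, len(rows) - length + 1, stride)]
```

Each window would then be one training sample (fall / no-fall) for the sequence model.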
BIN  quote1/digest/image-1.png (Stored with Git LFS)  Normal file
Binary file not shown.
BIN  quote1/digest/image-2.png (Stored with Git LFS)  Normal file
Binary file not shown.
BIN  quote1/digest/image.png (Stored with Git LFS)  Normal file
Binary file not shown.
1  quote1/digest/repositories.md  Normal file
@@ -0,0 +1 @@
- https://github.com/YJZFlora/Fall_Detection_Deep_Learning_Model
55  quote1/digest/whole.md  Normal file
@@ -0,0 +1,55 @@
Problems faced

## Questions

I don't understand why a YOLO has to be stuffed in,
because OpenPose + LSTM + transfer training should be able to meet the requirements,
and also handle sitting-down detection.

Along the way we may need classmates to help with the classifying.

I'd like to ask what resources I can use:

- a host machine?
- a GPU?

I'd like to ask how the LiDAR sensor connects:

- USB?
- WiFi / LAN?

1. The LiDAR sensor cannot stream its view out directly, so a piece of code is probably needed to stream the screen out (e.g. the top-right corner of the screen; the size still needs to be confirmed).

   1. Don't trust that.

2. On the LSTM + OpenPose side, the model is basically assembled (see https://github.com/YJZFlora/Fall_Detection_Deep_Learning_Model). But the training datasets on that GitHub have already gone through OpenPose into JSON files and then been converted into CSV files, so we run into problems when adding our own dataset. We tried running a clip through OpenPose, but the output is one JSON file per frame rather than a single file covering the whole clip. The intended pipeline is: Video -> Images -> OpenPose skeletal point localization on humans -> saved as JSON files -> converted into CSV files -> ready for the LSTM model for fall prediction.
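The one-JSON-file-per-frame issue described in point 2 can be worked around by flattening all the frame files into a single CSV. A hedged sketch, assuming OpenPose's standard per-frame output layout (`{"people": [{"pose_keypoints_2d": [x, y, c, ...]}]}`) and the 25-keypoint BODY_25 model; the file-naming pattern and single-subject handling are simplifying assumptions:

```python
# Hedged sketch: merge OpenPose's one-JSON-per-frame output into one CSV.
# Assumes each frame file looks like
# {"people": [{"pose_keypoints_2d": [x0, y0, c0, x1, y1, c1, ...]}]}.
import csv, glob, json, os

def frames_to_csv(json_dir: str, out_csv: str, num_keypoints: int = 25) -> int:
    """Flatten per-frame keypoint JSONs into one CSV row per frame; return row count."""
    header = ["frame"] + [f"{axis}{i}" for i in range(num_keypoints) for axis in ("x", "y", "c")]
    rows = 0
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(header)
        for frame, path in enumerate(sorted(glob.glob(os.path.join(json_dir, "*_keypoints.json")))):
            with open(path) as jf:
                data = json.load(jf)
            people = data.get("people", [])
            # Single-subject assumption for now; multi-person ID tracking is future work.
            kp = people[0]["pose_keypoints_2d"] if people else [0.0] * (num_keypoints * 3)
            writer.writerow([frame] + kp[:num_keypoints * 3])
            rows += 1
    return rows
```

The resulting CSV would then match the row-per-frame shape the referenced repository's training data already uses.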
3. We also want to try to make it real-time, but don't have much direction yet.

   1. The only reference so far (Forum: Is it possible to output JSON data from the OpenPose library in real-time? https://stackoverflow.com/questions/57061757/is-it-possible-to-output-json-data-from-the-openpose-library-in-real-time)

4. For the soft alarm, the plan is to keep it fairly independent, with an on/off switch. One soft alarm will use YOLO v11 to recognise posture (image-based), e.g. human sitting on chair, human sitting on floor, or neither (this YOLO part is already trained). This soft-alarm feature is only meant for specific environments, e.g. a hospital ward: a human sitting on the floor is fairly abnormal behaviour in a ward, so the soft alarm will sound.
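The soft-alarm rule in point 4 reduces to a small predicate. A sketch under stated assumptions: the posture label is taken to come from the already-trained YOLO posture classifier, and the label names and `ward_mode` switch are illustrative placeholders:

```python
# Hedged sketch of the soft-alarm rule: ring only when the feature is enabled,
# the environment is a ward, and the detected posture is sitting-on-floor.
# Label strings are assumptions, not the trained model's actual class names.
def soft_alarm(posture: str, ward_mode: bool, enabled: bool) -> bool:
    """Return True when the soft alarm should sound."""
    return enabled and ward_mode and posture == "sitting_on_floor"
```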
5. As for the GUI, this part hasn't been researched much yet, so the current idea is rather complicated. To make training and testing easier, we will use the depth images captured by the "lidar sensor". These raw depth images must go through the LSTM for skeleton-based human fall detection (path 1); when a fall happens, the red alarm will be triggered. At the same time, if the soft alarm is switched on, the raw depth images also go through YOLO for image-based detection (path 2); if someone is sitting on the floor, the soft alarm will be triggered. Separately, to protect privacy (a project requirement), only the LiDAR output captured by the sensor may be displayed (path 3), and we would like to add an OpenPose display on top of this output (see https://www.youtube.com/watch?v=9jQGsUidKHs). (PS: so in practice two independent screens need to be streamed out, the depth image and the lidar image.)
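The three paths in point 5 can be sketched as a per-frame dispatcher. The model callables (`run_lstm`, `run_yolo`, `overlay_pose`) are placeholders standing in for the real LSTM, YOLO, and OpenPose-overlay components; this only illustrates the routing, not the models:

```python
# Hedged sketch: route one captured frame pair through the three paths
# described above. All three callables are hypothetical stand-ins.
def route_frame(depth_img, lidar_img, soft_alarm_on, run_lstm, run_yolo, overlay_pose):
    alarms = {}
    alarms["red"] = run_lstm(depth_img)                      # path 1: skeleton-based fall detection
    alarms["soft"] = soft_alarm_on and run_yolo(depth_img)   # path 2: image-based posture check
    display = overlay_pose(lidar_img)                        # path 3: privacy-safe LiDAR display
    return alarms, display
```

Whether the two images really arrive as one frame pair (versus two streams) would depend on how the screen capture is done.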
6. Also, for now every LSTM dataset has only one person falling, but we hope the system can handle the case of more than one person as far as possible (2 people 💩). Perhaps add an ID to each skeleton to indicate that patient 1 has fallen while patient 2 is normal. If this can't be done it doesn't matter; what matters most is that even with more than one person present, the system recognises that someone has fallen. Btw the project targets indoor use.

## New block diagram



## Intermediate Report (Just for reference)

### ABSTRACT

With the aging population in Hong Kong, falls have emerged as a significant health threat, responsible for substantial morbidity and mortality among seniors, with significant implications for healthcare systems and quality of life. This project focuses on designing and developing an artificial intelligence-based LiDAR sensor system that is non-invasive and accurate for real-time fall detection for the elderly, particularly in hospital settings. Enhancing safety in hospital environments by issuing timely alarms when falls occur can minimize medical resource waste and improve patient outcomes. This project introduces the collection and preparation of datasets for normal activities and fall events, model development using YOLO and LSTM for object and human detection, and extensive testing of the system's effectiveness.

We conducted a thorough literature review comparing different types of LiDAR sensors, ultimately favoring 3D LiDAR not only for its non-invasive nature and superior depth perception compared to traditional RGB cameras, but also for its cost-effectiveness, ability to produce 3D point cloud data, and high accuracy. By comparing current human detection models, we identified a fitting algorithm for improving detection outcomes, identifying human pose and action for fall detection via the human skeleton keypoint recognition method. Compared to conventional methods, which often rely on intrusive surveillance or wearable devices, the combination of AI and LiDAR technology significantly enhances the ability to detect falls. Additionally, the project addressed several limitations of current human fall detection models, such as the confusion between lying down, sleeping, and falling motions, ultimately leading to improved accuracy and reliability in fall detection.

The methodology section details the hardware setup utilizing the XT-S240 Mini LiDAR sensor; the data collection from various sources, including the UR Fall Detection Dataset, Multiple Camera Dataset, and a YouTube dataset; data processing; landmark extraction; and the development of two primary models, YOLO and LSTM. We created a custom dataset specifically tailored to hospital settings, which involved capturing and annotating images of essential objects such as beds and chairs for the YOLO model. Furthermore, we integrated human pose estimation through OpenPose for body landmark extraction and applied LSTM networks to analyze temporal movement patterns indicative of falls, thus enhancing the system's ability to recognize dynamic fall scenarios accurately.

To summarize the progress made to date, we developed a comprehensive dataset, which was annotated for training purposes. We developed two main algorithms: YOLO for object detection in complex environments and LSTM for analyzing temporal sequences of body movements to identify falls. This project also includes hardware functionality tests, LSTM model parameter optimization, and the challenges faced in preparing suitable datasets for YOLO training.

As future directions, we propose further enhancements to the system's capabilities: a user-friendly graphical user interface, two operational modes to adapt the system to various environments, and innovative soft-alarm solutions such as voice-activated alarm triggering. Moreover, the detection algorithms must be optimized further to ensure rapid responses in critical situations and thus the system's operational readiness in real-world applications.

In conclusion, this report highlights the importance of developing innovative solutions for fall detection, combining advanced sensor technology with machine learning to tackle a critical health issue affecting the elderly population. The innovations introduced in this project illustrate a step forward in fall detection technology, promising to contribute positively to the well-being of vulnerable populations.
BIN  quote1/from_customer/AI Lidar.pdf  Normal file
Binary file not shown.
BIN  quote1/from_customer/Ai Lidar/AI Lidar.docx  Normal file
Binary file not shown.
BIN  quote1/from_customer/Ai Lidar/LSTM Dataset/FYP Dataset.v2i.coco-segmentation/.DS_Store (vendored)  Normal file
Binary file not shown.
@@ -0,0 +1,6 @@
# FYP Dataset > 2023-12-04 2:07am
https://universe.roboflow.com/nstp/fyp-dataset-pxqgu

Provided by a Roboflow user
License: CC BY 4.0
@@ -0,0 +1,29 @@
FYP Dataset - v2 2023-12-04 2:07am
==============================

This dataset was exported via roboflow.com on December 3, 2023 at 9:10 PM GMT

Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time

For state-of-the-art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks

To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com

The dataset includes 262 images.
Furniture-ERWu are annotated in COCO Segmentation format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)

No image augmentation techniques were applied.
Binary files not shown.
File diff suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff.