This commit is contained in:
louiscklaw
2025-02-01 02:10:34 +08:00
commit b79d5bc270
302 changed files with 62709 additions and 0 deletions

31
.gitattributes vendored Normal file

@@ -0,0 +1,31 @@
*.mp4 filter=lfs diff=lfs merge=lfs
*.zip filter=lfs diff=lfs merge=lfs
*.7z filter=lfs diff=lfs merge=lfs
*.tar.gz filter=lfs diff=lfs merge=lfs
*.jpg filter=lfs diff=lfs merge=lfs
*.png filter=lfs diff=lfs merge=lfs
*.avif filter=lfs diff=lfs merge=lfs
*.webm filter=lfs diff=lfs merge=lfs
*.mkv filter=lfs diff=lfs merge=lfs
# Documents
*.doc diff=astextplain
*.DOC diff=astextplain
*.docx diff=astextplain
*.DOCX diff=astextplain
*.dot diff=astextplain
*.DOT diff=astextplain
*.pdf diff=astextplain
*.PDF diff=astextplain
*.rtf diff=astextplain
*.RTF diff=astextplain
*.gif filter=lfs diff=lfs merge=lfs
*.GIF filter=lfs diff=lfs merge=lfs
*.bmp filter=lfs diff=lfs merge=lfs
*.BMP filter=lfs diff=lfs merge=lfs
*.tiff filter=lfs diff=lfs merge=lfs
*.TIFF filter=lfs diff=lfs merge=lfs
*.wav filter=lfs diff=lfs merge=lfs
*.WAV filter=lfs diff=lfs merge=lfs
*.log filter=lfs diff=lfs merge=lfs

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
**/~*.*

1
_lab/test.sh Normal file

@@ -0,0 +1 @@
ubuntu:22.04

16
gitUpdate.sh Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -ex

# Work around flaky Git LFS pushes of large files: force HTTP/1.1,
# tolerate incomplete LFS pushes, verify locks, and raise the HTTP
# post buffer to 5 GiB.
git config --global http.version HTTP/1.1
git config --global lfs.allowincompletepush true
git config --global lfs.locksverify true
git config --global http.postBuffer 5368709120

git add .
git commit -m 'update,'
git push

echo "done"

5
meta.md Normal file

@@ -0,0 +1,5 @@
---
tags: machine-learning
---
discord: kriss5644

45
quote1/NOTES.md Normal file

@@ -0,0 +1,45 @@
Hi, the pig from Po Tat Estate~? 😂😂
Hello, I'd like to hire you to write some code for me, and I want to ask about pricing 🥹
The project is about the design and development of an AI LIDAR sensor for fall detection.
Part of the project has already been done, and I have a few ideas.
The plan is to use LSTM + OpenPose to build a skeleton-based human fall detection system,
then add a small soft alarm so the system appears to have a few more features,
so in the end a GUI is needed to integrate all the functionality.
The LiDAR sensor is off-the-shelf. For the training datasets, the plan is to train on ordinary camera videos first, then add videos from the LiDAR sensor and train again.
The project aims for real-time detection as far as possible.
If it's convenient for you, I'd also like to share the problems I'm currently running into 🥹😂😂
```
Hello, a few questions:
1. UG / IVE / diploma?
2. Is this a solo project or a group project?
3. Can you handle the LiDAR hardware yourself?
4. Which part do you want me to write the code for?
5. When is the deadline?
6. Are you using TensorFlow or PyTorch?
7. Do you have your own GPU?
---
1. UG
2. Group project
3. It's fine as long as we can stream two screens out (e.g. the top-right and bottom-left corners of the screen?)
4. Mainly I'd like help with the LSTM + OpenPose part, especially getting real-time JSON output from the OpenPose library and converting it back to CSV for model testing (a sketch of this conversion follows this quote).
If possible, I'd also like you to help with the GUI 😖, especially streaming the two images (depth image & lidar image) out and then processing them (for now this works out to three paths; not sure whether it will fall apart, or I can send you a demo video to try out later).
5. The deadline is the end of March.
6. Currently using Google Colab, but it has too many limitations, so I'm planning to switch platforms.
7. Google Colab has one?
```
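Point 4 above asks for per-frame OpenPose JSON converted back into a CSV for model testing. Below is a minimal sketch of that conversion, assuming OpenPose's default per-frame output (one `*_keypoints.json` file per frame, BODY_25 model) and a single person per frame; the directory and CSV names are placeholders, not anything from the project.

```python
# merge_openpose_json.py -- a minimal sketch; assumes OpenPose's default
# BODY_25 output (25 keypoints x (x, y, confidence) = 75 values) written
# with --write_json, one JSON file per frame, single person per frame.
import csv
import json
from pathlib import Path

JSON_DIR = Path("output_json")   # placeholder: OpenPose --write_json directory
OUT_CSV = Path("keypoints.csv")  # placeholder: one row per frame

with OUT_CSV.open("w", newline="") as f:
    writer = csv.writer(f)
    # 25 keypoints x (x, y, confidence) = 75 columns per frame
    writer.writerow([f"{axis}{i}" for i in range(25) for axis in ("x", "y", "c")])
    for path in sorted(JSON_DIR.glob("*_keypoints.json")):
        people = json.loads(path.read_text()).get("people", [])
        if not people:
            continue  # no detection this frame; could also write a row of zeros
        writer.writerow(people[0]["pose_keypoints_2d"])
```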
https://drive.google.com/drive/folders/1OZsp-3nGTGX9oHfPpXmNgNCWI1fRevAS
---
https://github.com/ekramalam/GMDCSA24-A-Dataset-for-Human-Fall-Detection-in-Videos/tree/master


Submodule quote1/_ref/Fall_Detection_Deep_Learning_Model added at 7040cdd021

Submodule quote1/_ref/neural-compressor added at a617115b14


Submodule quote1/_ref/openpose/openpose-docker added at 5d6a9497ba

35
quote1/digest/glances.md Normal file

@@ -0,0 +1,35 @@
Problems we are facing
## Questions
OpenPose plus the dataset:
OpenPose + LSTM/CNN + transfer learning should be able to meet the requirements,
plus sitting-down detection on top of that.
Along the way we may need to find classmates to help with the classifying.
I'd like to ask what hardware I can use:
- a host machine?
- a GPU?
- (better to have a dedicated PC for the presentation)
I'd like to ask how the LiDAR sensor connects:
- USB?
- Wi-Fi / LAN?
---
![alt text](image-1.png)
depth image
![alt text](image-2.png)
lidar image
depth image --> fed into OpenPose --> results come out frame by frame (this is where the problem is; see the buffering sketch below)
---
Stuffing a YOLO in is lower priority.
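The note above flags that OpenPose results arrive frame by frame, while an LSTM wants fixed-length sequences. One common fix is a sliding-window buffer; the sketch below assumes a 30-frame window and the 75-value BODY_25 feature vector, both illustrative choices rather than project requirements.

```python
# A minimal sliding-window sketch: collect per-frame keypoints into
# fixed-length sequences for an LSTM. Window length and feature size
# are assumptions, not values taken from the project.
from collections import deque

import numpy as np

WINDOW = 30    # frames per LSTM input sequence (assumed)
FEATURES = 75  # 25 BODY_25 keypoints x (x, y, confidence)

buffer = deque(maxlen=WINDOW)

def on_new_frame(keypoints):
    """Call once per OpenPose frame with a flat list of 75 floats.
    Returns a (1, WINDOW, FEATURES) batch once the window is full,
    otherwise None."""
    buffer.append(np.asarray(keypoints, dtype=np.float32))
    if len(buffer) < WINDOW:
        return None
    return np.stack(buffer)[None, ...]  # ready for model.predict(...)
```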

BIN
quote1/digest/image-1.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
quote1/digest/image-2.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
quote1/digest/image.png (Stored with Git LFS) Normal file

Binary file not shown.


@@ -0,0 +1 @@
- https://github.com/YJZFlora/Fall_Detection_Deep_Learning_Model

55
quote1/digest/whole.md Normal file

@@ -0,0 +1,55 @@
Problems we are facing
## Questions
I don't understand why we need to stuff a YOLO in,
because OpenPose + LSTM + transfer learning should be able to meet the requirements,
plus sitting-down detection on top of that.
Along the way we may need to find classmates to help with the classifying.
I'd like to ask what hardware I can use:
- a host machine?
- a GPU?
I'd like to ask how the LiDAR sensor connects:
- USB?
- Wi-Fi / LAN?
1. The LiDAR sensor cannot stream its view out directly, so we will probably need a piece of code that streams a region of the screen (e.g. the top-right corner of the screen; the exact size still needs to be confirmed).
    1. Don't trust it.
2. On the LSTM + OpenPose side, the model is basically assembled (reference: https://github.com/YJZFlora/Fall_Detection_Deep_Learning_Model). But the training datasets in that GitHub repo have already been run through OpenPose into JSON files and converted into CSV files, so we hit a problem when adding our own dataset: running a video through OpenPose produces one JSON file per frame rather than a single file covering the whole video. The intended pipeline is: Video -> Images -> OpenPose skeletal keypoint localization on the human -> saved as JSON files -> converted into a CSV file -> ready to go into the LSTM model for fall prediction.
3. Also, we want to try to make it real time, but we don't have a clear direction yet.
    1. The only reference so far: Forum: Is it possible to output JSON data from the OpenPose library in real-time? (https://stackoverflow.com/questions/57061757/is-it-possible-to-output-json-data-from-the-openpose-library-in-real-time)
4. For the soft alarm, the plan is to keep it fairly independent, with an on/off switch. One soft alarm will use YOLO v11 to recognize posture (image based), e.g. human sitting on chair, human sitting on floor, or neither (the YOLO for this part is already trained). This soft-alarm feature is only meant for specific environments, e.g. a hospital ward: a human sitting on the floor is fairly abnormal behaviour in a ward, so the soft alarm should sound (see the sketch after this list).
5. Then the GUI. This part has not been researched much yet, so the current idea is rather complicated. For convenience of training and testing, we will use the depth images captured by the "lidar sensor". The raw depth images go through the LSTM for skeleton-based human fall detection (path 1); when a fall occurs, the red alarm is triggered. At the same time, if the soft alarm is switched on, the raw depth images also go through YOLO for image-based detection (path 2); if someone is sitting on the floor, the soft alarm is triggered. Separately, to protect privacy (a project requirement), only the LiDAR output captured by the sensor may be displayed (path 3), and we would like to add an OpenPose display on top of this output (reference: https://www.youtube.com/watch?v=9jQGsUidKHs). PS: so in practice we need to stream 2 independent screens (depth image and lidar image).
6. Also, all the LSTM datasets so far have only a single person falling, but we hope the system can handle the multi-person case as far as possible (2 people 💩). Perhaps add an ID to each skeleton to indicate that patient 1 fell while patient 2 is normal. If this cannot be done it is not critical; what matters most is that, even with more than one person present, the system can tell that someone fell. Btw, the project targets indoor use.
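Item 4 describes the soft-alarm path concretely enough to sketch. The snippet below is a minimal illustration, assuming the already-trained posture model is usable as Ultralytics YOLO weights (the `posture.pt` filename and the `sitting_on_floor` class name are hypothetical) and that frames arrive as OpenCV images.

```python
# A minimal soft-alarm sketch (item 4). "posture.pt" and the class name
# "sitting_on_floor" are hypothetical; swap in the real trained weights
# and labels. A webcam stands in for the depth-image stream here.
import cv2
from ultralytics import YOLO

model = YOLO("posture.pt")        # hypothetical: the trained posture model
ALARM_CLASS = "sitting_on_floor"  # hypothetical class label
soft_alarm_on = True              # the on/off switch from item 4

cap = cv2.VideoCapture(0)         # stand-in for the depth-image stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if soft_alarm_on:
        result = model(frame, verbose=False)[0]
        labels = {result.names[int(c)] for c in result.boxes.cls}
        if ALARM_CLASS in labels:
            print("soft alarm: person sitting on floor")  # placeholder action
cap.release()
```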
## New block diagram
![alt text](image.png)
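To make the three paths in the diagram concrete, here is a minimal dispatch-loop sketch. Everything in it is a placeholder: the capture indices stand in for the real depth and LiDAR streams, the path 1/2 work is left as comments because the real model calls live elsewhere, and only path 3 (the privacy-safe LiDAR view) is displayed, as item 5 requires.

```python
# A minimal three-path dispatch loop matching the block diagram.
# Capture indices and window names are placeholders.
import cv2

depth_cap = cv2.VideoCapture(0)  # placeholder: depth-image stream
lidar_cap = cv2.VideoCapture(1)  # placeholder: lidar-image stream

while True:
    ok_d, depth = depth_cap.read()
    ok_l, lidar = lidar_cap.read()
    if not (ok_d and ok_l):
        break
    # Path 1: depth -> OpenPose + LSTM fall detection -> red alarm
    # Path 2: depth -> YOLO posture check -> soft alarm (if switched on)
    # Path 3: lidar -> the only stream actually shown (privacy requirement)
    cv2.imshow("lidar output", lidar)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

depth_cap.release()
lidar_cap.release()
cv2.destroyAllWindows()
```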
## Intermediate Report (Just for reference)
### ABSTRACT
With the aging population in Hong Kong, falls have emerged as a significant health threat, responsible for substantial morbidity and mortality among the elderly, with serious implications for healthcare systems and quality of life. This project focuses on designing and developing a non-invasive, accurate, artificial intelligence-based LiDAR sensor system for real-time fall detection among the elderly, particularly in hospital settings. Enhancing safety in hospital environments by issuing timely alarms when falls occur can minimize wasted medical resources and improve patient outcomes. This project covers the collection and preparation of datasets for normal activities and fall events, model development using YOLO and LSTM for object and human detection, and extensive testing of the system's effectiveness.
We conducted a thorough literature review comparing different types of LiDAR sensors, ultimately favoring 3D LiDAR not only for its non-invasive nature and superior depth perception compared to traditional RGB cameras, but also for its cost-effectiveness, ability to produce 3D point-cloud data, and high accuracy. By comparing current human detection models, we identified a suitable algorithm for recognizing human pose and action for fall detection, adopting the human skeleton keypoint recognition method. Compared with conventional approaches, which often rely on intrusive surveillance or wearable devices, the combination of AI and LiDAR technology significantly enhances the ability to detect falls. Additionally, the project addresses several limitations of current human fall detection models, such as confusion between lying down, sleeping, and falling motions, ultimately leading to improved accuracy and reliability in fall detection.
The methodology section details the hardware setup utilizing the XT-S240 Mini LiDAR sensor; data collection from various sources, including the UR Fall Detection Dataset, Multiple Camera Dataset, and a YouTube dataset; data processing; landmark extraction; and the development of two primary models, YOLO and LSTM. We created a custom dataset specifically tailored to hospital settings, which involved capturing and annotating images of essential objects such as beds and chairs for the YOLO model. Furthermore, we integrated human pose estimation through OpenPose for body landmark extraction and applied LSTM networks to analyze temporal movement patterns indicative of falls, thus enhancing the system's ability to recognize dynamic fall scenarios accurately.
To summarize the progress made to date, we developed a comprehensive dataset, which was annotated for training purposes, and two main algorithms: YOLO for object detection in complex environments and LSTM for analyzing temporal sequences of body movements to identify falls. This report also covers hardware functionality tests, LSTM model parameter optimization, and the challenges faced in preparing suitable datasets for YOLO training.
In future directions, we propose further enhancements to the system's capabilities, a user-friendly graphical user interface, two operational modes to adapt the system to various environments, and innovative soft-alarm solutions such as voice-activated alarm triggering. Moreover, the detection algorithms must be optimized further to ensure rapid responses in critical situations and, in turn, the system's operational readiness in real-world applications.
In conclusion, this report highlights the importance of developing innovative solutions for fall detection, combining advanced sensor technology with machine learning to tackle a critical health issue affecting the elderly population. The innovations introduced in this project represent a step forward in fall detection technology and promise to contribute positively to the well-being of vulnerable populations.



@@ -0,0 +1,6 @@
# FYP Dataset > 2023-12-04 2:07am
https://universe.roboflow.com/nstp/fyp-dataset-pxqgu
Provided by a Roboflow user
License: CC BY 4.0


@@ -0,0 +1,29 @@
FYP Dataset - v2 2023-12-04 2:07am
==============================
This dataset was exported via roboflow.com on December 3, 2023 at 9:10 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state-of-the-art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 262 images.
Instances of Furniture-ERWu are annotated in COCO Segmentation format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
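Since the export is in COCO Segmentation format, it can be sanity-checked with pycocotools. The sketch below assumes Roboflow's usual COCO export layout, where each split folder carries a `_annotations.coco.json` file; verify the path against the actual archive.

```python
# A minimal sketch for inspecting the COCO Segmentation export with
# pycocotools; "train/_annotations.coco.json" follows Roboflow's usual
# COCO export layout and should be verified against the archive.
from pycocotools.coco import COCO

coco = COCO("train/_annotations.coco.json")
print(len(coco.getImgIds()), "images,", len(coco.getAnnIds()), "annotations")
print("categories:", [c["name"] for c in coco.loadCats(coco.getCatIds())])
```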

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.