Implementation Steps
1. Define the request object.
2. Call the skeleton detection processing method.
3. Extract key point data.
4. Lower the confidence threshold and define a helper to obtain key points.
5. Obtain specific key points based on their types.
Implementation Code
1. Define the Request Object
// Wrap the decoded PixelMap in a vision request
const request: visionBase.Request = { inputData: { pixelMap } };
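The pixelMap passed into the request is assumed to be an image that has already been decoded. A minimal sketch of one way to obtain it with Image Kit follows; the file name person.jpg and the loadPixelMap helper are illustrative and not part of the original case.

import { image } from '@kit.ImageKit';
import { common } from '@kit.AbilityKit';

// Illustrative helper: decode a bundled rawfile resource into a PixelMap
async function loadPixelMap(context: common.Context): Promise<image.PixelMap> {
  const rawData = await context.resourceManager.getRawFileContent('person.jpg');
  const imageSource = image.createImageSource(rawData.buffer);
  return await imageSource.createPixelMap();
}

How the context is obtained depends on where the code runs, for example a UIAbility or an ArkUI component.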
2. Call the Skeleton Detection Processing Method
// Run skeleton detection asynchronously; the response contains the detected skeletons
const data = await skeletonDetector.process(request);
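The skeletonDetector instance used here is assumed to exist already. Below is a minimal sketch of how it might be created and how the call can be wrapped in error handling; the SkeletonDetector.create() factory is an assumption inferred from the Core Vision Kit pattern, so verify the exact API in the reference for your SDK version.

// Assumed factory method; confirm against the Core Vision Kit documentation
const skeletonDetector = await skeletonDetection.SkeletonDetector.create();
try {
  const data = await skeletonDetector.process(request);
  console.info(`Detected ${data.skeletons.length} skeleton(s)`);
} catch (err) {
  console.error(`Skeleton detection failed: ${JSON.stringify(err)}`);
}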
3. Extract Key Point Data
// Take the key points of the first detected skeleton
const points = data.skeletons[0].points;
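Indexing skeletons[0] assumes at least one person was detected. A defensive variant is sketched below, using the same response fields as above.

// Guard against images in which no skeleton is detected
if (data.skeletons.length === 0) {
  console.error('No skeleton detected in the input image');
  return; // stop here, or surface the failure to the UI
}
const points = data.skeletons[0].points;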
4. Lower the Confidence Threshold and Define the Method to Obtain Key Points
const MIN_CONFIDENCE = 0.3; // Lower the confidence threshold
const getPoint = (type: skeletonDetection.SkeletonPointType) =>
  points.find(p => p.type === type && p.score >= MIN_CONFIDENCE)?.point;
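When tuning MIN_CONFIDENCE it can help to log every point that clears the threshold. The sketch below assumes each detected point exposes x and y coordinates, which is not shown in the original snippets.

// Debug aid for tuning MIN_CONFIDENCE: list all key points above the threshold
for (const p of points) {
  if (p.score >= MIN_CONFIDENCE) {
    console.info(`type=${p.type} score=${p.score.toFixed(2)} x=${p.point.x} y=${p.point.y}`);
  }
}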
5. Obtain Specific Key Points
// Obtain key points (prefer the nose, then the ears)
const nose = getPoint(skeletonDetection.SkeletonPointType.NOSE);
const leftShoulder = getPoint(skeletonDetection.SkeletonPointType.LEFT_SHOULDER);
const rightShoulder = getPoint(skeletonDetection.SkeletonPointType.RIGHT_SHOULDER);
let leftEar = getPoint(skeletonDetection.SkeletonPointType.LEFT_EAR);
let rightEar = getPoint(skeletonDetection.SkeletonPointType.RIGHT_EAR);
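The comment above says to prefer the nose and fall back to the ears. A minimal sketch of that fallback is shown below; it again assumes the returned points expose x and y coordinates, and the headPoint name is illustrative.

// Prefer the nose; otherwise approximate the head position with the midpoint of the ears
let headPoint = nose;
if (headPoint === undefined && leftEar !== undefined && rightEar !== undefined) {
  headPoint = { x: (leftEar.x + rightEar.x) / 2, y: (leftEar.y + rightEar.y) / 2 };
}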
Conclusion
The key point of this case is how to perform skeleton detection with the ArkTS language on HarmonyOS 5.0. Defining the request object, calling the processing method, and extracting the key point data together implement the basic skeleton detection function. Lowering the confidence threshold additionally makes detection more flexible, so that the results better match actual requirements.
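For reference, the steps above can be combined into a single function. The following is a minimal end-to-end sketch under the same assumptions as the earlier snippets: the SkeletonDetector.create() factory and the detectHeadAndShoulders name are illustrative rather than taken from the original case.

import { skeletonDetection, visionBase } from '@kit.CoreVisionKit';
import { image } from '@kit.ImageKit';

const MIN_CONFIDENCE = 0.3; // Lowered confidence threshold

// Illustrative end-to-end flow: the PixelMap is decoded elsewhere and passed in
async function detectHeadAndShoulders(pixelMap: image.PixelMap): Promise<void> {
  // 1. Define the request object
  const request: visionBase.Request = { inputData: { pixelMap } };

  // 2. Create the detector (assumed factory) and run detection
  const skeletonDetector = await skeletonDetection.SkeletonDetector.create();
  const data = await skeletonDetector.process(request);
  if (data.skeletons.length === 0) {
    console.error('No skeleton detected');
    return;
  }

  // 3. Extract key point data from the first skeleton
  const points = data.skeletons[0].points;

  // 4. Helper that applies the lowered confidence threshold
  const getPoint = (type: skeletonDetection.SkeletonPointType) =>
    points.find(p => p.type === type && p.score >= MIN_CONFIDENCE)?.point;

  // 5. Obtain specific key points
  const nose = getPoint(skeletonDetection.SkeletonPointType.NOSE);
  const leftShoulder = getPoint(skeletonDetection.SkeletonPointType.LEFT_SHOULDER);
  const rightShoulder = getPoint(skeletonDetection.SkeletonPointType.RIGHT_SHOULDER);
  console.info(`nose=${JSON.stringify(nose)}, leftShoulder=${JSON.stringify(leftShoulder)}, rightShoulder=${JSON.stringify(rightShoulder)}`);
}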