Dear all,
I’m trying to make a kind of Pokémon game using Google Maps and the locations of monuments, playgrounds, etc. So far so good: everything works on my phone using the sensors and GPS, and GDevelop made it possible. Now for the final part: looking through the phone camera to find the target and grab it. It is very easy to grab video/images from your phone using navigator.mediaDevices, which is the basis of WebRTC web apps these days. The easiest way is rendering the camera onto a canvas. However, trying this I got into a conflict with the GDevelop canvas. Is there a way to work around this? I thought about grabbing images from a hidden canvas, but how do I get those into GDevelop?
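For reference, this is the bare capture pattern I mean (just a minimal sketch; the element here is a placeholder, not yet wired to GDevelop):
// Minimal sketch of the standard WebRTC capture pattern
const video = document.createElement("video");
video.autoplay = true;
navigator.mediaDevices
    .getUserMedia({ video: true }) // request the camera, no audio
    .then(function (stream) {
        video.srcObject = stream; // the video element now shows the live camera feed
    })
    .catch(function (error) {
        console.error("Camera access failed:", error);
    });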
Regards,
Michel
Hey! You need to load a placeholder image (e.g. “camera loading”) and set it on a sprite. Then use some JS code to get the camera feed into a canvas (make sure it has a different ID than the one from GDevelop). Then create a PIXI texture from the canvas receiving the camera input and replace the “camera loading” texture with the one created from the canvas.
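Roughly like this (an untested sketch; “VideoSprite” is a placeholder object name, and Texture.fromCanvas is the PIXI call used further down in this thread):
// Untested sketch: swap a GDevelop sprite's texture for one backed by our own canvas.
const myCanvas = document.createElement("canvas"); // our own canvas, NOT GDevelop's
myCanvas.id = "camera-canvas"; // make sure the ID differs from GDevelop's renderer canvas
document.body.appendChild(myCanvas);
// ... draw the camera video into myCanvas here ...
const objects = runtimeScene.getObjects("VideoSprite"); // placeholder sprite name
const pixiSprite = objects[0].getRendererObject(); // the underlying PIXI sprite
pixiSprite.texture = PIXI.Texture.fromCanvas(myCanvas); // replace the "camera loading" texture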
Dear Arthuro555,
Many thanks. I followed your idea and have it working in the sense that I see the video on the video element, but rendering it onto the canvas is not successful. When I try to draw the video stream on the canvas, nothing happens; it seems to be empty, and all video-related properties of the video element are zero. I used this JavaScript to add the elements to the HTML:
//INIT++++++++++++++++++++++++++++++++++++++++++++++++++++++++
const VideoRenderer = document.createElement("video"); // Create HTML video element
const VideoCanvas = document.createElement("canvas");  // Create HTML canvas element
const PictureTexture = document.createElement("img");  // Create HTML img element
// Give each element an identifier
VideoRenderer.id = "camera--view";
VideoCanvas.id = "camera--sensor";
PictureTexture.id = "camera--output";
VideoRenderer.autoplay = true; // works
VideoCanvas.width = 320;  // does not seem to work
VideoCanvas.height = 240; // does not seem to work
PictureTexture.src = "//:0"; // blank placeholder src
document.body.appendChild(VideoRenderer);  // Add video element to the HTML body
document.body.appendChild(VideoCanvas);    // Add canvas to the HTML body
document.body.appendChild(PictureTexture); // Add image to the HTML body
//+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
After this, on a button press, I fire the next JavaScript:
//++++++++grab picture from camera and render+++++++++++++++++++++
var constraints = { video: true }; // video only
var cameraView = document.getElementById("camera--view");     // video element
var cameraCanvas = document.getElementById("camera--sensor"); // canvas
var cameraImage = document.getElementById("camera--output");  // image

function cameraStart() {
    navigator.mediaDevices
        .getUserMedia(constraints)
        .then(handleSuccess)
        .catch(function (error) {
            console.error("Oops. Something is broken.", error);
        });
}

function handleSuccess(stream) {
    cameraView.srcObject = stream; // !!!! works, I get video in the video element
    cameraCanvas.width = 640;  // hardcoded because cameraView.videoWidth returns zero
    cameraCanvas.height = 480; // hardcoded because cameraView.videoHeight returns zero
    console.log("width " + cameraView.videoWidth + " height " + cameraView.videoHeight); // always zero
    cameraCanvas.getContext("2d").drawImage(cameraView, 0, 0, 640, 480);
    cameraImage.src = cameraCanvas.toDataURL("image/webp");
    cameraImage.classList.add("taken");
}

cameraStart();
//+++++++++++++++++end++++++++++++++++++++++++++
So what does not seem to work is cameraView.srcObject = stream; there is video on the video element, but cameraView does not seem to have any of the related properties like videoWidth. What is wrong?
Hmm… Maybe try to draw the video into the canvas only when it starts playing / has finished loading. I am also not sure why you have that image element; if you are looking at the canvas through that image, that might be the issue. What I would do in either case is replace
cameraCanvas.getContext("2d").drawImage(cameraView, 0, 0,640,480);
with something like
cameraView.addEventListener('play', function () {
    let that = this; // Cache reference to cameraView for the closure
    (function loop() {
        if (!that.paused && !that.ended) { // Adapt to your needs
            cameraCanvas.getContext("2d").drawImage(that, 0, 0);
            setTimeout(loop, 1000 / 30); // drawing at 30fps
        }
    })();
}, 0);
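If you also need the real videoWidth/videoHeight to size the canvas, waiting for the metadata should work; a sketch (untested):
// Untested sketch: size the canvas once the stream's metadata has loaded,
// so videoWidth/videoHeight are no longer zero.
cameraView.addEventListener('loadedmetadata', function () {
    cameraCanvas.width = cameraView.videoWidth;
    cameraCanvas.height = cameraView.videoHeight;
});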
You are a hero. It works after changing cameraCanvas.drawImage(that, 0, 0); to cameraCanvas.getContext("2d").drawImage(that, 0, 0);
On to the next step: rendering the canvas onto a PIXI texture :)
OK, the final stage is almost there, but again a question. I did this:
cameraView.addEventListener('play', function () {
    cameraImage.src = cameraCanvas.toDataURL("image/webp");
    cameraImage.classList.add("taken");
    let that = this; // Cache reference to cameraView for the closure
    var object_texture_image = runtimeScene.getObjects("VideoPlayerM"); // Get all the objects called "VideoPlayerM", which is a sprite
    var object_texture_image_renderer = object_texture_image[0].getRendererObject(); // Get that object's renderer (PIXI sprite)
    (function loop() {
        if (!that.paused && !that.ended) { // Adapt to your needs
            cameraCanvas.getContext("2d").drawImage(that, 0, 0);
            object_texture_image_renderer.texture = PIXI.Texture.fromCanvas(cameraView); // here I hoped to update the texture
            setTimeout(loop, 1000 / 30); // drawing at 30fps
        }
    })();
}, 0);
So what works is that the texture is now empty/cleared, but it does not show the camera view. I guess I’m again asking for the texture before it is complete, yet in this same loop I render successfully onto the HTML canvas.
I think the error might come from:
PIXI.Texture.fromCanvas(cameraView);
I think you actually want to pass cameraCanvas (as the texture is created from a canvas):
PIXI.Texture.fromCanvas(cameraCanvas);
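So the loop body would become something like this (just a sketch of that one-line change, everything else as in your code):
// Sketch: same loop as before, only the texture source changes to the canvas.
(function loop() {
    if (!that.paused && !that.ended) {
        cameraCanvas.getContext("2d").drawImage(that, 0, 0);
        // Build the texture from the canvas we just drew into, not from the video element:
        object_texture_image_renderer.texture = PIXI.Texture.fromCanvas(cameraCanvas);
        setTimeout(loop, 1000 / 30); // drawing at 30fps
    }
})();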
I still don’t understand why you have cameraImage. Can you explain why you put it there?
Hmmm, you are right. Stupid of me. But now it works. Many thanks, case closed.
This needs to be an extension.
Seems like PANDAKO is building this as an extension…