How to Raycast from the Camera to Mouse Position


Júlio Rodrigues

There are many occasions in games where it's necessary to know what the player clicked. After all, it's one of the fundamental ways to interact with a digital world.

Unfortunately, there is a problem. The user can only click the screen, in 2D coordinates, but in general we're interested in what that point represents in the 3D game world. We need to translate this screen position to a world position, and one way to do this is by imagining a ray that goes from the camera through a screen point into the game world.

The ScreenPointToRay() function

This is done using a raycast and the helper function ScreenPointToRay() from the camera object. In this use case, we're using the mouse position as input. For more information on this function, see the official documentation.

The key to understanding this is to imagine that there's a void between the camera and the near clipping plane; you can think of the near clipping plane as if it were the screen. Now it should be easy to imagine the direction of the ray we're casting. See more about the near clipping plane and the view frustum, the concept it belongs to.
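To build that intuition, you can draw the ray in the Scene view while the game runs. This is a minimal sketch; the `DebugRay` class name is illustrative, but `ScreenPointToRay()` and `Debug.DrawRay()` are the actual Unity APIs:

```csharp
using UnityEngine;

// Attach to any object in the scene to visualize the mouse ray.
public class DebugRay : MonoBehaviour {
    private Camera cam;

    private void Start() {
        cam = Camera.main;
    }

    private void Update() {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        // Draw 10 units of the ray in red; visible in the Scene view.
        Debug.DrawRay(ray.origin, ray.direction * 10f, Color.red);
    }
}
```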

This is how it can be done in a GC-friendly way:

// cam is a cached Camera reference, results a preallocated RaycastHit
// buffer, and ValidLayers a LayerMask (see the full class below).
Ray ray = cam.ScreenPointToRay(Input.mousePosition);
int n = Physics.RaycastNonAlloc(ray, results, 100f, ValidLayers);
if (n > 0)
{
    for (int i = 0; i < n; i++)
    {
        // do something with results[i];
    }
}

A sample using object placement

More than just pointing out the functions and object properties you should use, I believe it's always helpful to show an example of what can be done with a technique. For that, I'm presenting a really common feature: placing an object in a room.

Notice that there's a z-ordering issue while the bed is transparent; once the bed no longer needs to be transparent, this can be fixed by properly setting all the properties of the Standard shader to render the object as opaque. See more on rendering modes in the official documentation.
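Switching a Standard shader material back to opaque at runtime takes more than flipping one value; a commonly used sketch follows (the property and keyword names below are those of Unity's built-in Standard shader, and the helper class name is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class MaterialModes {
    // Reconfigure a Standard shader material to render as opaque.
    public static void SetOpaque(Material material) {
        material.SetFloat("_Mode", 0f);                   // 0 = Opaque
        material.SetOverrideTag("RenderType", "");
        material.SetInt("_SrcBlend", (int)BlendMode.One); // no alpha blending
        material.SetInt("_DstBlend", (int)BlendMode.Zero);
        material.SetInt("_ZWrite", 1);                    // write to the depth buffer again
        material.DisableKeyword("_ALPHATEST_ON");
        material.DisableKeyword("_ALPHABLEND_ON");
        material.DisableKeyword("_ALPHAPREMULTIPLY_ON");
        material.renderQueue = -1;                        // back to the shader's default queue
    }
}
```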

The following code has some shortcuts (commented in the code) that shouldn't be used in production.

using UnityEngine;

public class BedPositioner : MonoBehaviour {

    public GameObject BedPrefab;
    public LayerMask ValidLayers;
    public Color Tint;

    private Color originalColor;
    private GameObject floatingBed;
    private Camera cam;
    private readonly RaycastHit[] results = new RaycastHit[5];
    private bool trackingPosition = true;

    private void Start() {
        cam = Camera.main;
        floatingBed = Instantiate(BedPrefab, transform, false);
        originalColor = floatingBed.GetComponent<Renderer>().material.color;
        SetBedTint(Tint);
    }

    /**
     * Non-production code. Shortcut: the child renderers are fetched by
     * index instead of being cached or assigned in the inspector.
     */
    private void SetBedTint(Color tint) {
        var material1 = floatingBed.GetComponent<Renderer>().material;
        var material2 = floatingBed.transform.GetChild(0).GetComponent<Renderer>().material;
        var material3 = floatingBed.transform.GetChild(1).GetComponent<Renderer>().material;
        material1.SetColor("_Color", tint);
        material2.SetColor("_Color", tint);
        material3.SetColor("_Color", tint);
    }

    private void Update() {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        // Debug.DrawRay(ray.origin, ray.direction);

        if (trackingPosition) {
            // Cast against ValidLayers only, up to 100 units, without allocating.
            int n = Physics.RaycastNonAlloc(ray, results, 100f, ValidLayers);
            if (n > 0) {
                for (int i = 0; i < n; i++) {
                    floatingBed.transform.position = results[i].point;
                }
            }
            // A left click locks the bed in place and restores its color.
            if (Input.GetMouseButton(0)) {
                trackingPosition = false;
                SetBedTint(originalColor);
            }
        }
    }
}

The interesting stuff is happening inside Update(). There we have the raycast test and also the left mouse button check. There's also a flag, trackingPosition, to control whether the bed is still being positioned.

When you're casting rays in Unity, it's always important to restrict them to specific layers to avoid useless checks. It's also wise to limit your ray distance. Another good tip is to use the NonAlloc version of the raycast when you cast on every frame.
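As a sketch of the first tip, a layer mask can also be built in code instead of being set in the inspector. The layer name "Floor" here is an assumption; adapt it to your project's layers:

```csharp
using UnityEngine;

public class FloorClickExample : MonoBehaviour {
    private int floorMask;
    private Camera cam;

    private void Start() {
        cam = Camera.main;
        // Build a mask matching only the (assumed) "Floor" layer.
        floorMask = LayerMask.GetMask("Floor");
    }

    private void Update() {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        // Limit the cast to 100 units and to the floor layer only.
        if (Physics.Raycast(ray, out RaycastHit hit, 100f, floorMask)) {
            Debug.Log(hit.point);
        }
    }
}
```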

What about OnMouseDown()? Isn't it easier?

An alternative to all that we did is to just use OnMouseDown(), which is perfectly valid in a lot of cases, especially if you're still in the prototyping phase. It has two main downsides, though.

The first one, which usually concerns most game developers, is performance. Since this is a magic function in Unity, there's a lot of overhead that could be avoided. Briefly, the reasons for the perf cost are that Unity has to do both 2D and 3D raycasts, the ignore layer mask is not configurable, and the engine has to figure out all the components on which the OnMouse* functions need to be called. All of this is generally wasted work, since in a real game we know better what's actually necessary.

The other reason is the lack of control in terms of code design. Since it's necessary to scatter OnMouse* functions across several objects, it's easy to fall into a maintenance nightmare. It's tempting to handle the behavior of these clicks in the script that has the OnMouse* function itself, instead of delegating, creating some really hard-to-understand data flows. This can be avoided if you manually create a simplistic event bubbling system, but that's so much work that it's not worth it; simply having a single script handle all the mouse clicks in the game world using raycasts is easier to maintain.
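A sketch of that single-script approach, assuming clickable objects implement a hypothetical IClickable interface (both the interface and the ClickManager name are illustrative, not Unity APIs):

```csharp
using UnityEngine;

// Hypothetical interface for anything that reacts to world clicks.
public interface IClickable {
    void OnClicked(RaycastHit hit);
}

public class ClickManager : MonoBehaviour {
    public LayerMask ClickableLayers;

    private Camera cam;

    private void Start() {
        cam = Camera.main;
    }

    private void Update() {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit, 100f, ClickableLayers)) {
            // Delegate to the object that was hit, if it's clickable.
            var clickable = hit.collider.GetComponent<IClickable>();
            clickable?.OnClicked(hit);
        }
    }
}
```

This keeps all the raycasting in one place while each object only implements its own reaction.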

Now that you know where the player clicked in your game world, go ahead and implement that core feature in your game! Let me know about your game on [email protected]. If you want to get an email whenever we publish a new article, the best way is through a subscription!

Detailed Unity Tutorials in your inbox

Subscribe if you want to receive articles and tutorials like this one right in your inbox. 

Filed under:
  • raycast
  • camera
  • scripting
© Bladecast 2018. All rights reserved.