Meta released a new artificial intelligence (AI) model this week that can identify specific objects in an image, along with a dataset of image annotations that it says is the largest to date. The Segment Anything Model, or SAM for short, is impressively advanced: according to Meta’s blog post, it can recognize items in images and videos that it did not encounter during its initial training.
With SAM, Meta aimed to simultaneously create a general, promptable segmentation model and use that model to build out its segmentation dataset, which it calls the Segment Anything 1-Billion mask dataset (SA-1B), at an unmatched scale. The technology can generate masks for any object in any image or video, and it is general enough to cover a broad set of image “domains,” such as underwater photographs or cell microscopy images.
In practice, SAM lets users segment objects with a simple click or by interactively selecting points to include in or exclude from an object. When a prompt is ambiguous, the AI can output several valid masks, and it can also automatically identify and mask every object in a single image or video. In one example, entering the term “cat” prompted SAM to create masks for each of several cats in a single image.
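SAM's mask scoring is learned end-to-end, but the idea of using include/exclude points to pick between several candidate masks can be illustrated with a small toy sketch. Everything below (the `rank_masks` function, its ranking heuristic, and the example masks) is invented for illustration and is not Meta's actual method or API:

```python
def rank_masks(masks, points, labels):
    """Rank candidate masks by agreement with the prompt points.

    masks:  list of candidate masks, each a list of rows of 0/1 values
    points: list of (row, col) prompt coordinates
    labels: parallel list; 1 = the point belongs to the object,
            0 = the point should fall outside the object
    """
    scored = []
    for mask in masks:
        # Count prompt points the mask classifies correctly.
        hits = sum(1 for (r, c), lab in zip(points, labels)
                   if mask[r][c] == lab)
        scored.append((hits / len(points), mask))
    # Best-agreeing candidate first.
    return sorted(scored, key=lambda t: t[0], reverse=True)

# Two plausible "cat" masks for the same click: a small blob and a larger one.
small = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
large = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]

# One include point at (1, 1) and one include point at (0, 0):
# only the larger mask covers both, so it ranks first.
ranked = rank_masks([small, large], [(1, 1), (0, 0)], [1, 1])
```

Here an extra include click resolves the ambiguity in favor of the larger mask; in the real model, the same prompts steer a neural network rather than a hand-written score.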
In other tech news, Google plans to integrate conversational AI into its search bar.