The Vaonis Vespera smart telescope makes it easier than ever to observe the night sky with your iPhone, but at a steep cost.
The Vaonis Vespera brings a sense of automation to astrophotography as an app-controlled, easy-to-carry telescope that requires no prior telescope experience. With endorsements from astronauts Terry Virts and Scott Kelly, it’s clear that even spacefarers see software and robotics shaping the future of astrophotography.
Out of the box, the telescope comes with a short adjustable tripod, a USB-C cable, and an adapter. The tripod legs can be screwed on, and the magnetic charger makes it easy to power the Vespera on the go with a power bank.
The Vespera weighs around 11 pounds and is small enough to fit in most backpacks and even a smaller crossbody bag. This makes carrying it out to a park or on a hike to observe the night sky easier than most other telescopes.
The Singularity app uses GPS to determine where you’re observing from, accounts for the Earth’s rotation, and handles focusing automatically, so no manual adjustment is needed. All you have to do is set up the Vespera on its tripod legs, open the app, and select what you want to observe.
The Vespera will open its telescope arm, swiveling and adjusting its angle for where to look in the sky.
Vaonis Vespera Smart Telescope – App and Use
Starting up the Vespera involves connecting to it via WiFi, and the initialization process can take around five to ten minutes to scan the sky for viable objects to observe.
With that said, we recommend you set up the Vespera in a large, open environment. A small yard with many trees is not ideal, while an open field can maximize the range of motion the telescope provides.
The Singularity app will tell you how long it’ll take to observe a given object, and the estimate is generally accurate to within five to ten minutes.
Depending on your surroundings, the initialization or observation can fail if there’s something blocking the telescope’s view, so some trial and error is needed to get a successful picture. Patience is required when using the Vespera since it can take a while to re-initialize and set up an observation again.
The battery is powerful enough to take on a long night of star-watching with a claimed eight hours of automation. On average, a thirty-minute observation consumed 5% of the battery, making the eight to ten-hour range plausible.
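For the curious, the back-of-the-envelope math behind that estimate looks like this; it’s a rough extrapolation that assumes battery drain stays roughly linear across a session.

```python
# Rough runtime extrapolation from the drain we observed (assumes roughly linear drain).
observation_minutes = 30     # length of a single observation
battery_used_percent = 5     # average battery consumed during that observation

drain_per_hour = battery_used_percent * (60 / observation_minutes)  # ~10% per hour
estimated_runtime_hours = 100 / drain_per_hour                      # ~10 hours

print(f"Estimated runtime: about {estimated_runtime_hours:.0f} hours on a full charge")
```

Real-world drain will vary with temperature and exposure length, so treat the ten-hour figure as a ceiling rather than a guarantee.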
In our limited astrophotography experience, the Vespera captured crisp and satisfying images. Depending on the closeness of the celestial object, the brightness and clarity can vary.
For example, we found that the Ring Nebula was much clearer to capture than the Whirlpool Galaxy. Of course, quality may depend on the user’s environment as well.
The Ring Nebula captured on Vespera
Images can be exported at 1920×1080 resolution in a variety of formats, including JPG, TIFF, and FITS. Being able to easily save your results to your phone is appealing if you want to share them with others.
Two modes on the Vespera especially take advantage of its ability to tirelessly take photos: mosaic mode and “Plan my Night.” Mosaic mode captures multiple snapshots of the sky and assembles them, which can take longer than a usual observation.
“Plan my Night,” however, lets you schedule observations of different objects throughout the night ahead of time. This makes it possible to study the stars in your sleep, whether you leave the telescope on your lawn or bring it along on a camping trip.
The “Plan my Night” feature shown on iPad.
Since it’s designed to run automated sessions outdoors for long stretches, the Vespera carries an IP43 water-resistance rating, so light splashes and rain are tolerable.
We’d still avoid prolonged exposure, though, and keep an eye on the weather if you leave it outside overnight.
A step in the right direction for accessibility
The Vespera telescope has a lot of potential as a tool to make astrophotography more accessible. With applications in education and appeal for space enthusiasts, the Vespera introduces the complexities of capturing celestial objects in an easy-to-digest way.
Following along using the Singularity app, users can learn and engage with the night sky to their heart’s content. The Vespera offers a wonderful way to embark on a personal hobby or share the experience with others.
Comparatively, a beginner telescope can range from $100 to $500, a fraction of the Vespera’s $1,499 price. The main features you’re paying for are the automation and ease of use, as well as the small size and portability.
Still, it’s a worthwhile investment if you want to take the leap and lack experience in handling telescopes.
Vaonis Vespera Smart Telescope Pros
Highly portable
Singularity app simplifies setup and observation
Good battery life and water resistant
Different features to take advantage of automation
Vaonis Vespera Smart Telescope Cons
High cost may be a deterrent
Brightness and clarity of captured images can vary
Initialization and observation can fail suddenly depending on surroundings
Rating: 3.5 out of 5
Where to buy the Vaonis Vespera Smart Telescope
The Vaonis Vespera Smart Telescope is available on the Vaonis store for $1,499 (plus $90 US shipping).
Apple’s proposal would be less bulky than the existing third-party Wristcam
Apple has big plans for cameras in future Apple Watches, if they can be fitted without making the watch awkward to wear — and if the cameras can be of high enough quality.
The popularity of the Apple Watch Ultra has shown that people are willing to wear bulkier devices if there is a clear benefit to them. In the case of the Apple Watch Ultra, that benefit includes a greatly extended battery life, for instance.
Future Apple Watches may also become at least a little larger, as Apple is again looking at ways to incorporate a camera. Apple has previously been granted a patent for a camera in the Apple Watch’s Digital Crown, but that would have very limited use.
Now, in a newly revealed patent application titled “Wearable Electronic Device Having A Digital Camera Assembly,” Apple proposes fitting a camera into a slight protrusion toward the top of the Watch’s chassis, above the display. The camera would sit in a similar position to the existing Wristcam, but that product is an entire Watch band.
Apple isn’t keen on taking up a whole band, or anything that makes the Apple Watch cumbersome.
“While certain electrical components, such as a camera, may perform desirable functions,” says the patent application, “the integration of such components may result in a bulky device which may hinder user performance, may be uncomfortable to wear, or may be unsuited for performing certain functions (e.g., a camera disposed within a wearable electronic device may be awkward to position when capturing optical input).”
Apple is also dismissive of compromises to fit the camera into a Watch band. “Additionally, low-quality components may not meet a user’s quality expectations,” it says, “(e.g., a low-quality camera may produce low-quality images).”
It’s not just that Apple wants you to look nice in photographs. Apple specifically wants a video camera capable of up to 4K at 60 frames per second, or a still camera of up to 12MP.
Either a spacecraft, or an Apple Watch side view with a camera protrusion to the right
That’s because this is for more than photographing wildlife or capturing the crowd at Little League. This camera is meant for far more than regular photography or video.
“The digital camera assembly may be used for a variety of purposes,” continues Apple, “including, as non-limiting examples, facial identification, fingerprint sensing, scanning a Quick Response (QR) code, video conferencing, biometric monitoring (e.g., heart rate monitoring), photography, video or image capture, or any combination thereof.”
So with a camera on your Apple Watch, you could unlock all of your Apple devices through Face ID.
The Watch could also use its camera to “capture movement of a user’s body or other objects during certain activities.” Using visual inertial odometry (VIO), “the camera can be used to obtain a high degree of motion sensing accuracy, which may be used to monitor, detect, and/or predict a user’s motion or gesture based on certain characteristics.”
That’s a lot to demand of a camera, and Apple is not expecting to be able to fit one under the screen of an Apple Watch. Instead, it will sit on what Apple calls a protrusion, and much of the patent application is about how to do that without making the Watch distracting to wear.
“[A] digital camera assembly may be integrated into the wearable electronic device in a way so as to minimize an effect of the digital camera assembly on other electronic components and/or a form factor of the wearable electronic device,” says Apple.
“For example, in implementations where a digital camera assembly is positioned within an internal cavity (e.g., camera cavity) of a protrusion,” it continues, “the digital camera assembly may extend from the housing, over a band slot, and away from a display, a battery, a circuit assembly, or sensors of the wearable electronic device.”
“Likewise, the protrusion may be shaped to avoid interfering with geometry of the band slot,” says the patent application, “so that a band/strap may still be permitted to couple with the housing of the wearable electronic device.”
The patent application is credited to five inventors, including Christopher M. Warner, whose previous work includes muscle-sensing Apple Watch bands.
Apple’s computational photography aims for realism
A controversy with Samsung’s phone cameras has renewed the conversation surrounding computational photography, and it highlights the difference between Samsung’s approach and Apple’s in iOS.
It isn’t a big secret that Apple relies upon advanced algorithms and computational photography for nearly all of its iPhone camera features. However, users are beginning to ask where to draw the line between these algorithms and something more intrusive, like post-capture pixel alteration.
In this piece, we will examine the controversy surrounding Samsung’s moon photos, how the company addresses computational photography, and what this means for Apple and its competitors going forward.
Computational photography
Computational photography isn’t a new concept. It became necessary as people wanted more performance from their tiny smartphone cameras.
The basic idea is that computers can perform billions of operations in a moment, like after a camera shutter press, to replace the need for basic edits or apply more advanced corrections. The more we can program the computer to do after the shutter press, the better the photo can be.
This started with Apple’s dual camera system on the iPhone 7 Plus. Other photographic innovations before then, like Live Photos, could be considered computational photography, but Portrait Mode was the turning point for Apple.
Apple introduced Portrait Mode in 2016, which took depth data from the two cameras on the iPhone 7 Plus to create an artificial bokeh. The company claimed it was possible thanks to the dual camera system and advanced image signal processor, which conducted 100 billion operations per photo.
Needless to say, this wasn’t perfect, but it was a step into the future of photography. Camera technology would continue to adapt to the smartphone form factor, chips would get faster, and image sensors would get more powerful per square inch.
Portrait mode uses computational photography to separate the foreground
In 2023, it isn’t unheard of to shoot cinematically blurred video using advanced computation engines with mixed results. Computational photography is everywhere, from the Photonic Engine to Photographic Styles — an algorithm processes every photo taken on iPhone. Yes, even ProRAW.
This was all necessitated by people’s desire to capture their life with the device they had on hand — their iPhone. Dedicated cameras have physics on their side with large sensors and giant lenses, but the average person doesn’t want to spend hundreds or thousands of dollars on a dedicated rig.
So, computational photography has stepped in to enhance what smartphones’ tiny sensors can do. Advanced algorithms built on large databases inform the image signal processor how to capture the ideal image, process noise, and expose a subject.
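To make that concrete, here is a minimal sketch of one of the simplest tricks in that toolbox: averaging a burst of aligned exposures so random sensor noise cancels out while the scene itself stays intact. This is an illustrative simplification with made-up frame sizes and noise levels, not any vendor’s actual pipeline.

```python
import numpy as np

def fuse_frames(frames):
    """Average a burst of aligned exposures to suppress random sensor noise.

    frames: equally sized arrays captured in quick succession. A plain mean
    cuts random noise by roughly sqrt(len(frames)); real pipelines also
    align frames and weight them per pixel before merging.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

# Simulated burst: one "true" scene plus independent noise in each frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, (480, 640))
burst = [scene + rng.normal(0, 25, scene.shape) for _ in range(8)]

fused = fuse_frames(burst)
print("single-frame error:", np.abs(burst[0] - scene).mean())  # roughly 20
print("fused error:       ", np.abs(fused - scene).mean())     # roughly 7
```

Real pipelines layer frame alignment, per-pixel weighting, and scene analysis on top of this, but the principle is the same: more captured data goes in, and one cleaner image comes out.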
However, there is a big difference between using computational photography to enhance the camera’s capabilities and altering an image based on data that the sensor never captured.
Samsung’s moonshot
To be clear: Apple is using machine learning models (or “AI,” for those partial to the poorly coined buzzword) for computational photography. The algorithms provide information about controlling multi-image captures to produce the best results or to create depth-of-field profiles.
The image processor analyzes skin tone, skies, plants, pets, and more to provide proper coloration and exposure, not pixel replacement. It isn’t looking for objects, like the moon, to provide specific enhancements based on information outside of the camera sensor.
We’re pointing this out because those debating Samsung’s moon photos have used Apple’s computational photography as an example of how other companies perform these photographic alterations. That simply isn’t the case.
Samsung’s moon algorithm in action. Credit: u/ibreakphotos on Reddit
Samsung has documented how Samsung phones, since the Galaxy S10, have processed images using object recognition and alteration. The Scene Optimizer began recognizing the moon with the Galaxy S21.
As the recently-published document describes, “AI” recognizes the moon through learned data, and the detail improvement engine function is applied to make the photo clearer with multi-frame synthesis and machine learning.
Basically, Samsung devices will recognize an unobscured moon and then use other high-resolution images and data about the moon to synthesize a better output. The result isn’t an image captured by the device’s camera but something new and fabricated.
Overall, this system is clever because the moon looks the same no matter where on Earth it is viewed from. The only things that change are the color of the light reflected from its surface and the phase of the moon itself. Enhancing the moon in a photo will always be a straightforward calculation.
Both Samsung and Apple devices take a multi-photo exposure for advanced computations. Both analyze multiple captured images for the best portion of each and fuse them into one superior image. However, Samsung adds an additional step for recognized objects like the moon, which introduces new data from other high-resolution moon images to correct the moon in the final captured image.
This isn’t necessarily a bad thing. It just isn’t something Samsung has made clear in its advertising or product marketing, which may lead to customer confusion.
The problem with this process, and the reason a debate exists, is how this affects the future of photography.
Long story short, the final image doesn’t represent what the sensor detected and the algorithm processed. It represents an idealized version of what might be possible but isn’t because the camera sensor and lens are too small.
The impending battle for realism
From our point of view, the key tenet of iPhone photography has always been realism and accuracy. If there is a perfect middle in saturation, sharpness, and exposure, Apple has trended close to center over the past decade, even if it hasn’t always remained perfectly consistent.
We acknowledge that photography is incredibly subjective, but it seems that Android photography, namely Samsung, has leaned away from realism. Again, not necessarily a negative, but an opinionated choice made by Samsung that customers have to address.
For the purposes of this discussion, Samsung and Pixel devices have slowly tilted away from that realistic, representational center. They are vying for more saturation, sharpness, or day-like exposure at night.
The example above shows how the Galaxy S22 Ultra favored more exposure and saturation, which led to a loss of detail. Innocent and opinionated choices, but the iPhone 13 Pro, in this case, goes home with a more detailed photo that can be edited later.
This difference in how photos are captured is set in the opinionated algorithms used by each device. As these algorithms advance, future photography decisions could lead to more opinionated choices that cannot be reversed later.
For example, by changing how the moon appears using advanced algorithms without alerting the user, that image is forever altered to fit what Samsung thinks is ideal. Sure, if users know to turn the feature off, they could, but they likely won’t.
We’re excited about the future of photography, but as photography enthusiasts, we hope it isn’t so invisible. Like Apple’s Portrait Mode, Live Photos, and other processing techniques — make it opt-in with obvious toggles. Also, make it reversible.
Tapping the shutter in a device’s main camera app should take a representative photo of what the sensor sees. If the user wants more, let them choose to add it via toggles beforehand or by editing afterward.
For now, try taking photos of the night sky with nothing but your iPhone and a tripod. It works.
Why this matters
It is important to stress that there isn’t any problem with replacing the ugly glowing ball in the sky with a proper moon, nor is there a problem with removing people or garbage (or garbage people) from a photo. However, it needs to be a controllable, toggle-able, and visible process to the user.
Computational photography is the future, for better or worse
As algorithms advance, we will see more idealized and processed images from Android smartphones. The worst offenders will outright remove or replace objects without notice.
Apple will inevitably improve its on-device image processing and algorithms. But, based on how the company has approached photography so far, we expect it will do so with respect to the user’s desire for realism.
Tribalism in the tech community has always caused debates to break out among users. Those have included Mac or PC, iPhone or Android, and soon, real or ideal photos.
We hope Apple continues to choose realism and user control over photos going forward. Giving a company complete opinionated control over what the user captures in a camera, down to altering images to match an ideal, doesn’t seem like a future we want to be a part of.
I struggle to look at any other Samsung smartphone now that I’ve been living with its foldables. The Samsung Galaxy Z Fold 4 has effectively changed how I use Android. Most of the time, I’ll only bother with my Google Pixel 7 if someone is calling the number linked to that phone. Otherwise, you’ll see me primarily on the foldable. It’s just so much more versatile for the life I lead.
That’s not to say I didn’t enjoy my time with the Samsung Galaxy S23 Ultra, but I missed the Fold while reviewing this one. Samsung’s ultimate new flagship device is everything you could want in a smartphone, but there is also a lot here that feels like overkill now that we’re in the second iteration of the Ultra and its stylus-wielding ways. In fact, I forgot to use the stylus until about two days ago (I don’t draw). And while four cameras are a great back-of-the-box brag, I still don’t understand how to push them to the extent they’ve been marketed as being capable of, and I realize I probably never will. And I like high spec phones!
Regardless, the Ultra still has plenty going for it, including a better design than the last generation. Those rear-facing cameras may not be enough to justify the price for casual users, but their post-processing algorithms are just as good as Google’s—better in some cases. The Ultra even has a few features I think foldables are still missing—like that stowable stylus.
But when it comes to targeting genuine innovation as opposed to niche specialty features, the Ultra might miss the mark compared to both the competition and Samsung’s other phones.
The best Ultra yet
If you like big phones, you’ll love the Galaxy S23 Ultra (I don’t—it’s not foldable). It has a 6.8-inch Dynamic AMOLED display, categorized as such because it’s based on tech that allows the display to dynamically change refresh rates without killing the battery. The jury is still out on how much battery that display tech saves, and I’ll get more into that when we talk about the battery rundown results later. Still, the display that Samsung has going here is like carrying a tiny version of its TVs in your pocket.
You might have gotten into the Galaxy line because you love Samsung’s displays. I can’t blame you. Like on the S22 Ultra, the screen on the S23 Ultra is a 1440p resolution with a 120Hz refresh rate. I love watching TV on this thing, even the 720p classics like Taxi and One Day at a Time. What I especially appreciate about Samsung is how low the brightness can go so that I can fall asleep to those shows at the end of the night without lighting up the room. Samsung enables the use of Android 12’s extra dim mode, and with that turned on, the phone doesn’t go any higher than about 350 nits—the standard rate is around 430 nits, or a whopping 1,750 nits if you’re out in direct sunlight and using the adaptive brightness feature.
The best part about the new Galaxy S23 Ultra is that Samsung fixed some of what I didn’t like with the Galaxy S22 Ultra’s design. Mainly, it squared off the edges instead of rounding them, so it’s easy to cradle the phone one-handed. I finally felt confident that I wasn’t going to drop it. I’m glad Samsung stopped with the overtly rounded edges, which are also annoying to use when you’re tapping on the edge of the screen.
This is still a gigantic smartphone. I hope you have big hands if you plan to play games on this thing. My small hands and long claws had difficulty cradling the Ultra to play with on-screen controls in games like Dreamlight Valley through Xbox Game Pass, and my wrists got weary holding the phone to control my character in Riptide GP: Renegade. The first-gen Razer Kishi controller that I use for Android gaming also feels as if it’s stretched to capacity on this phone, as if the Galaxy S23 Ultra will pop out at any minute. Unless it’s a point-and-tap game, I use a Bluetooth controller to play games on the S23 Ultra. The OnePlus 11’s similarly sizeable 6.7-inch display, comparatively, feels less ginormous because it doesn’t have the Ultra’s squared-off corners and the chassis is narrower.
The Galaxy S23 Ultra utilizes an in-display fingerprint sensor and face unlock for added lock screen security. It’s best that Samsung didn’t carry over the power button fingerprint sensor like on the Z Fold 4, because I am constantly accidentally pressing that one and locking myself out of it. Scanning in a fingerprint or smiling at the Ultra felt fast and responsive unless I wore a mask or sunglasses.
The default storage space on the S23 Ultra has thankfully been bumped up to 256GB. It starts there and goes all the way up to 1TB, if you can stomach paying for it (doing so adds $420 on top of the base model’s cost). The Ultra is also IP68 rated for water and dust resistance.
Qualcomm with Samsung flavoring
Something to note about this year’s Galaxy S23 lineup is that it runs a unique flavor of the Qualcomm Snapdragon 8 Gen 2 processor. Rather than use the chip exactly as it comes out of the box, Samsung infused some of its AI smarts to tune the camera and performance algorithms to its liking. The company already does this to some extent with its Exynos chips overseas, and it’s bringing that expertise to the phones sold in the States to one-up Google’s homemade Tensor processor. Sometimes it works.
The Galaxy S23 Ultra is available with 8GB or 12GB of RAM, which seems absurd. The Ultra should have 12GB of memory as standard, since it’s technically the ultimate Samsung phone. Even with the 12GB of RAM, you can’t tell that the chip inside the Galaxy S23 Ultra is any beefier than what’s inside the similarly specced OnePlus 11. On paper, and in Geekbench 5 (which will be Geekbench 6 in our reviews going forward), the Galaxy S23 Ultra performed better than the OnePlus 11 by only about 300 points on the single-core score and 400 points on the multi-core one. But that proves little about whether Samsung’s tuned chip is faster or more capable than OnePlus’s vanilla one in actual use. Considering the Google Pixel 7 Pro is a laughingstock on the benchmark charts but not in real-world use—it scores about 400 points lower than the Galaxy S23 Ultra—it’s hard to treat these benchmarks as the sole test of what’s possible. Anyway, neither of these Android devices can hold a candle to the numbers that Apple’s A16 Bionic spits out.
The upside to having such a powerful smartphone is that it can do everything: play games locally and from the cloud, create and edit documents, quickly export edited videos, process RAW photos, and chat with whoever. The Ultra can handle each of these cases with absolute ease, but that’s expected from a phone that I’ve been running for about three weeks. The real test for these devices is how they do after a year in the hand.
I echo the sentiments of a few other reviews: the Galaxy S23 Ultra doesn’t get as hot as previous versions of the device or even other Android phones. I fell asleep next to it a few nights in a row while it was charging and playing Pluto TV, and I didn’t feel the usual heat emanating as the battery fueled up for the next day. It did get toasty once while I was mindlessly scrolling through TikTok (as I often do), and it was significant enough that I remember saying, “I should probably mention this in the review.”
Apple’s iPhone 14 Pro Max lasts longer
I’m sorry to include Apple in the subhead of a Samsung Galaxy review. But I remain impressed by the battery test on Apple’s latest flagship, and it’s now the benchmark for every other flagship phone review.
Samsung’s 5,000 mAh battery is enormous while remaining the same size as in last year’s Ultra. Whatever Samsung did on the backend to extend battery life has worked thus far—the S23 Ultra beat out the S22 Ultra by about two hours, lasting 18 hours and 33 minutes. But that’s nothing to Apple’s nearly 24-hour battery life on its large iPhone 14 Pro Max. I want some of whatever magic Apple has going on with its software to come to Android land.
These results translated to daily use, too. As I mentioned, I’m a TikTok freak, and I was surprised to see that the Ultra chewed through only 23% of its battery in five hours of mixed use, which included tuning into my Disney streamer.
Move over, Pixel camera
Because the Galaxy S23 Ultra is dubbed “ultimate,” its cameras are appropriately extreme. They’re also the key upgrade here, and they took up the majority of Samsung’s announcement event for this phone. The primary camera pairs a 200-MP sensor with a standard wide-angle lens, optical image stabilization (OIS), and an f/1.7 aperture. The ultra-wide camera is a 12-MP sensor with an f/2.2 aperture. The two telephoto cameras on the back also have OIS; one has an f/2.4 aperture with 3x optical zoom, and the other is f/4.9 with 10x optical zoom. The maximum digital zoom is 100x, just like on the S22 Ultra.
Whenever someone outside of the Android bubble realizes the Galaxy S23 Ultra has four cameras on the back, they often ask me, “why?” The answer is so it has camera lenses for every foreseeable situation. For instance, if you’re chasing your kid around the park, you want that quick 3x optical zoom to capture them in the frame and up close. The result is a background bokeh effect that helps make the image instantly shareable on Instagram without using Portrait mode. Or if you happen to be lying down at the park, only to hear the roar of a jet engine approaching overhead, you can use the 10x optical zoom to get a closer look and maybe even post it to TikTok. For epic sky days, when the clouds seem to be cruising through as if they’re fresh cotton candy spun right out of the bin, the ultra wide-angle camera helps increase the drama when shared in your secret Slack channel of friends obsessed with sunsets.
Nowadays, most smartphone cameras are capable of everything I just described, but Samsung promises higher resolution and greater color and distance detail. These are the cameras we have on us every day, and Samsung argues that these are the digital memories we’ll be pulling from as we struggle to remember our lives someday in the future.
That’s not to say that every photo the Galaxy S23 Ultra produces is perfect. Zooming past the 10x optical limit means praying the image won’t come out jaggy or over-sharpened. On the evening of my daughter’s third birthday, so many of the pictures of her punching around a balloon came out blurry—a real bummer as I tried to find a cute one to share in group chats. I also stayed up one night to capture the Air Force flying their planes overhead, and I couldn’t produce anything worth sharing.
As it stands, the 200-MP sensor on the Galaxy S23 Ultra isn’t shooting at its full resolution at all times. Like most flagship smartphones, including the iPhone 14 Pro and Google Pixel 7, Samsung uses pixel binning: the sensor combines every 16 adjacent pixels into one, so the phone effectively shoots 12-MP images. The result is brighter photos throughout with better detail. I preferred the 12-MP images worked over by the algorithm to the full 200-MP raw ones, which usually require some post-editing anyway. I want to avoid editing a photo while just trying to share it on social media.
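For readers curious what that binning step actually does, here is a simplified sketch; the dimensions are hypothetical, and it ignores the sensor’s color filter array and Samsung’s proprietary processing. Every 4x4 block of sensor pixels is combined into one output pixel, turning a roughly 200-MP readout into a roughly 12.5-MP image that has gathered far more light per pixel.

```python
import numpy as np

def bin_pixels(sensor, factor=4):
    """Sum each factor-by-factor block of sensor pixels into one output pixel.

    A 4x4 bin collapses 16 sensor pixels into one, so a ~200-MP readout
    becomes a ~12.5-MP image; each output pixel gathers 16x the light,
    which is why binned shots look brighter and cleaner in low light.
    """
    h, w = sensor.shape
    h, w = h - h % factor, w - w % factor  # trim edges to a clean multiple
    blocks = sensor[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

# Hypothetical full-resolution readout, sized to approximate a 200-MP sensor.
sensor = np.random.randint(0, 256, (16320, 12240), dtype=np.uint16)
image = bin_pixels(sensor)
print(sensor.shape, "->", image.shape)  # (16320, 12240) -> (4080, 3060), ~12.5 MP
```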
You can see more clearly how the Galaxy S23 Ultra’s post-processing stacks up against the iPhone 14 Pro Max and Pixel 7 Pro in the slideshow I put together here. For the most part, I found Samsung’s algorithms veer toward oversaturation, though they were impressive at tempering the final product to maintain detail where it mattered. The most obvious example is a photo I shot of the Santa Ynez Mountains in Santa Barbara; the S23 Ultra held on to the subtle detail of the sunset, lighting up the ridges without over-contrasting them.
I wrote more about Expert RAW in the other piece, including Samsung’s improved astrophotography feature. I wish Samsung would have offered this feature on its own rather than burying it inside another download that has to be enabled in the camera app before anyone knows it’s even there. Samsung includes all these unique camera features as if we’re supposed to know how to use them right out of the box. But as with the improved nighttime video recording capabilities teased during the Ultra’s debut at Galaxy Unpacked earlier this month, I had no idea where to start. Just because a smartphone can do all these fancy things doesn’t mean the general population will aspire to them. And after ten years of reviewing smartphones, I might also give up.
That’s a big problem, as the camera system here is a major selling point and a major justification for the price tag. Compare that to Apple, which, because it makes both the iPhone and iOS, is able to bundle its phones with tons of everyday usability conveniences.
Before we move on from the cameras, there are a few other things to note: video recording on this smartphone is aces, even without a tripod. But for stability’s sake, I’ve been propping the Ultra up on a handheld tripod and following my kid around at 60 fps. The video is so smooth! The Ultra maxes out at 30 frames per second in 8K resolution for video recording, and there’s a Pro Video mode if you’re comfortable with tweaking camera settings. The front-facing camera is a 12-MP sensor with an f/2.2 aperture; annoyingly, it doesn’t zoom in or out.
Does a smartphone need a stylus?
Samsung’s S Pen has been around for a long while. It’s as iconic as Paris Hilton’s chihuahuas in the 2000s (RIP to them all). Last year’s Ultra was the first time it appeared in the regular Galaxy lineup after the sunsetting of the Galaxy Note series of yore. But functionally, it’s similar to what the S Pen could do before. You can pop it out for drawing and cropping when the situation on screen calls for it—accommodating business people doing precise things, like moving a cursor within a document or signing off on a contract while waiting in line somewhere. But I’m starting to realize this screen is too limited for anything art-driven. Granted, I’m not an artist, but if I imagine myself as a college student (again), the S Pen would feel much more appropriate docked inside a gadget like the Z Fold 4, which can open up into a larger display fit for highlighting and making digital notes. That’s a form factor that lends itself to a stylus, rather than the cramped screen on the S23 Ultra.
The other problem with the S Pen is that it requires its own space inside the chassis to dock. That’s part of the tradeoff for a phone that’s slightly too big for your pocket or for gaming controllers that straddle it. As much as the S Pen is an iconic tool, I don’t know that it belongs on a smartphone anymore, even if you can use it as a Bluetooth controller.
Samsung’s version of Android
The Galaxy S23 Ultra ships with One UI 5, based on the latest version of Android 13. The One UI 5.1 update is the one everyone’s waiting for right now, since it includes features like Bixby Text Calling, which works similarly to the Pixel’s Call Screen. This feature is now live in English (it was available only in Korea until now), but I couldn’t get it to work during my testing period. I hope to revisit this and some of Bixby’s other features later, as I’m curious to understand the benefits of sticking with it over the tried-and-true (even if sometimes frustrating) Google Assistant.
I don’t mind Samsung’s version of Android, especially since adopting the foldable. I’ve realized it comes with the benefit of Samsung tweaking what Google provides to suit its devices, even if it bears little resemblance to Android’s interface framework, Material You. Samsung offers some neat integration with Microsoft’s Your Phone app on Windows PCs that goes beyond the default experience, including the ability to remotely control your device from the desktop. There’s also the ability to snap a photo in Expert RAW and have it immediately populate in Adobe Lightroom. These abilities are nice to have, but as with the Galaxy S22 Ultra last year, I hardly ever considered using them after the review period was over. They’re not a reason to go out and buy a phone.
Still too much phone
I know there are people out there salivating over the Galaxy S23 Ultra. They want the best that Samsung has to offer in its lineup, whether it’s for bragging rights or because they want all those lenses and this is the only camera they’ll own. I get all that, but I still think the Ultra is a bit of overkill in a market where we’re all screaming for a deal. There are still two other models of the Galaxy S23 that I have yet to review, and though they’re smaller devices with slightly different chassis, they more or less deliver the same Samsung experience across the board for less. They’re priced a little over the Pixel 7 lineup, starting at $800 and $1,000 for the S23 and S23+, respectively.
If you’re going to spend a starting price of $1,200 on any Android smartphone, I’m pleading with you to get a foldable instead. Yes, it’s a new kind of form factor with dubious longevity, but it’s not going away any time soon. For many, even those who want the best, camera fidelity will reach a point of diminishing returns. But a foldable drastically changes the user experience. There is more competition cropping up overseas, and the rumor mill is getting louder as more manufacturers hop on board this new smartphone fad. At the very least, if you’re spending a whopping amount of money on a smartphone, get something that’s a bona fide phone and a tablet for the price.
Better and better cameras are perhaps not what each new generation of a phone should be targeting, at least anymore.