Vespera Smart Telescope review: specs, performance, cost

The Vaonis Vespera smart telescope makes it easier than ever to observe the night sky with your iPhone, but at a steep cost.

The Vaonis Vespera telescope brings a sense of automation to astrophotography as an app-controlled, easy-to-carry telescope that people can use without prior telescope experience. Endorsements from astronauts Terry Virts and Scott Kelly suggest that even seasoned spacefarers see the future of astrophotography being shaped by software and robotics.

Out of the box, the telescope comes with a short adjustable tripod, a USB-C cable, and an adapter. The tripod legs can be screwed on, and the magnetic charger makes it easy to power the Vespera on the go with a power bank.

The Vespera weighs around 11 pounds and is small enough to fit in most backpacks and even a smaller crossbody bag. This makes carrying it out to a park or on a hike to observe the night sky easier than most other telescopes.

The Vespera is a snug fit in a medium-sized Jansport crossbody.

The Singularity app uses GPS to set your observing location, accounting for the Earth's rotation and handling focus automatically so no manual adjustments are needed. All you have to do is set up the Vespera on its tripod legs, open the app, and select what you want to observe.

The Vespera will open its telescope arm, swiveling and adjusting its angle for where to look in the sky.
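Vaonis hasn't published how the Vespera's pointing actually works, but the underlying math is standard astronomy: convert a target's fixed catalog coordinates (right ascension and declination) into the altitude and azimuth to aim at for the observer's GPS position and the current time, which shifts continuously as the Earth rotates. Here is a minimal sketch of that conversion using the astropy library; the location and target are examples only, not Vespera's code.

```python
# Illustrative only: a standard alt/az pointing calculation, not Vaonis' implementation.
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time
import astropy.units as u

# Observer position, e.g. taken from the phone's GPS fix (example values).
site = EarthLocation(lat=34.05 * u.deg, lon=-118.24 * u.deg, height=90 * u.m)

# Target: the Ring Nebula (M57), resolved by name via an online catalog lookup.
target = SkyCoord.from_name("M57")

# Because the Earth rotates, altitude and azimuth depend on the exact time.
now = Time.now()
altaz = target.transform_to(AltAz(obstime=now, location=site))

print(f"Altitude: {altaz.alt:.1f}, Azimuth: {altaz.az:.1f}")
if altaz.alt < 20 * u.deg:
    print("Target is low on the horizon -- trees or buildings may block it.")
```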

Vaonis Vespera Smart Telescope – App and Use

Starting up the Vespera involves connecting to it via WiFi, and the initialization process can take around five to ten minutes to scan the sky for viable objects to observe.

With that said, we recommend you set up the Vespera in a large, open environment. A small yard with many trees is not ideal, while an open field can maximize the range of motion the telescope provides.

The Singularity app will tell you how long it'll take to observe a given object, and in our experience the estimate was accurate to within five to ten minutes.

Depending on your surroundings, the initialization or observation can fail if there’s something blocking the telescope’s view, so some trial and error is needed to get a successful picture. Patience is required when using the Vespera since it can take a while to re-initialize and set up an observation again.

The battery is powerful enough to take on a long night of star-watching, with a claimed eight hours of automated operation. On average, a thirty-minute observation consumed about 5% of the battery, which makes a runtime of eight to ten hours plausible.
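Here is the back-of-the-envelope math behind that estimate, treating our observed drain rate as constant (in practice it varies with temperature and how much the mount slews):

```python
# Rough battery-life estimate from the observed drain of ~5% per 30-minute session.
drain_per_session = 0.05      # fraction of battery used per observation
session_minutes = 30

sessions_per_charge = 1.0 / drain_per_session              # about 20 sessions
hours_per_charge = sessions_per_charge * session_minutes / 60
print(f"Estimated runtime: {hours_per_charge:.0f} hours")   # roughly 10 hours
```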

In our limited astrophotography experience, the Vespera captured crisp and satisfying images. Depending on the closeness of the celestial object, the brightness and clarity can vary.

For example, we found that the Ring Nebula was much clearer to capture than the Whirlpool Galaxy. Of course, quality may depend on the user’s environment as well.

The Ring Nebula captured on Vespera

Images can be exported at 1920×1080 resolution in a variety of formats, including JPG, TIFF, and FITS. Being able to easily save your results to your phone is appealing if you want to share them with others.
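FITS in particular preserves the data for further processing in astronomy tools. As a rough illustration of what you can do with an exported frame, here is how one might stretch a single-channel FITS image and save it as a shareable JPEG using astropy and Pillow; the filename is a placeholder, and the Vespera's actual files may be structured differently.

```python
# Sketch: convert an exported single-channel FITS frame to a shareable JPEG.
# "m57.fits" is a placeholder name, not an actual Vespera export.
import numpy as np
from astropy.io import fits
from PIL import Image

with fits.open("m57.fits") as hdul:
    data = hdul[0].data.astype(np.float64)   # assumes the primary HDU holds a 2D image

# Simple percentile stretch to map raw counts into an 8-bit display range.
lo, hi = np.percentile(data, (1, 99.5))
scaled = np.clip((data - lo) / (hi - lo), 0, 1) * 255

Image.fromarray(scaled.astype(np.uint8)).save("m57.jpg")
```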

Two modes on the Vespera especially take advantage of its ability to tirelessly take photos: mosaic mode and “Plan my Night.” Mosaic mode captures multiple snapshots of the sky and assembles them, which can take longer than a usual observation.
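Vaonis doesn't document how its mosaic pipeline works internally, but the general idea is to place each captured tile at its known offset on a larger canvas and average the regions where tiles overlap. Below is a stripped-down sketch of that assembly step; a real pipeline would also align frames, match brightness, and blend seams.

```python
# Minimal sketch of mosaic assembly: each tile lands at a known pixel offset on a
# larger canvas, and overlapping regions are averaged. Alignment, brightness
# matching, and seam blending are omitted for brevity.
import numpy as np

def assemble_mosaic(tiles, offsets, canvas_shape):
    """tiles: list of 2D arrays; offsets: (row, col) of each tile's top-left corner."""
    canvas = np.zeros(canvas_shape, dtype=np.float64)
    weight = np.zeros(canvas_shape, dtype=np.float64)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        canvas[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    return canvas / np.maximum(weight, 1.0)   # avoid dividing by zero

# Example: four 100x100 tiles with 20 pixels of overlap form a 180x180 mosaic.
tiles = [np.random.rand(100, 100) for _ in range(4)]
offsets = [(0, 0), (0, 80), (80, 0), (80, 80)]
mosaic = assemble_mosaic(tiles, offsets, (180, 180))
print(mosaic.shape)
```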

“Plan my Night,” the second mode, lets you schedule the Vespera to observe a series of objects over the course of the night. This makes it possible to study the stars in your sleep, whether the telescope is left on your lawn or set up on a camping trip.

The “Plan my Night” feature shown on iPad.

Since it is designed to run unattended outdoors for long stretches, the Vespera carries an IP43 water-resistance rating, so light splashes and rain are tolerable.

We'd avoid too much exposure, though, and keep an eye on the weather if you leave it outside overnight.

A step in the right direction for accessibility

The Vespera telescope has a lot of potential as a tool to make astrophotography more accessible. Whether in a classroom or in the hands of a space enthusiast, it introduces the complexities of capturing celestial objects in an easy-to-digest way.

Following along using the Singularity app, users can learn and engage with the night sky to their heart’s content. The Vespera offers a wonderful way to embark on a personal hobby or share the experience with others.

By comparison, a beginner telescope can range from $100 to $500, a fraction of the Vespera's $1,499 price. What you are paying for is the automation and ease of use, along with the small size and portability.

Still, it’s a worthwhile investment if you want to take the leap and lack experience in handling telescopes.

Vaonis Vespera Smart Telescope Pros

  • Highly portable
  • Singularity app simplifies setup and observation
  • Good battery life and water resistance
  • Multiple features that take advantage of automation

Vaonis Vespera Smart Telescope Cons

  • High cost may be a deterrent
  • Brightness and clarity of captured images can vary
  • Initialization and observation can fail suddenly depending on surroundings

Rating: 3.5 out of 5

Where to buy the Vaonis Vespera Smart Telescope

The Vaonis Vespera Smart Telescope is available on the Vaonis store for $1,499 (plus $90 US shipping).

Future Apple Watch could get cameras for photography & Face ID

Apple’s proposal would be less bulky than the existing third-party Wristcam

Apple has big plans for cameras in future Apple Watches, if they can be fitted without making the watch awkward to wear — and if the cameras can be of high enough quality.

The popularity of the Apple Watch Ultra has shown that people are willing to wear bulkier devices if there is a clear benefit to them. In the case of the Apple Watch Ultra, that benefit includes a greatly extended battery life, for instance.

Future Apple Watches may also become at least a little larger, as Apple is again looking at ways to incorporate a camera. Apple has previously been granted a patent for putting a camera in the Apple Watch's Digital Crown, but that placement would have very limited use.

Now, in a newly revealed patent application titled "Wearable Electronic Device Having A Digital Camera Assembly," Apple proposes fitting a camera into a slight protrusion toward the top of the Watch's chassis, above the display. The camera would sit in roughly the same place as the existing Wristcam's, but that product is a whole Watch band.

Apple isn’t keen on taking up a whole band, or anything that makes the Apple Watch cumbersome.

“While certain electrical components, such as a camera, may perform desirable functions,” says the patent application, “the integration of such components may result in a bulky device which may hinder user performance, may be uncomfortable to wear, or may be unsuited for performing certain functions (e.g., a camera disposed within a wearable electronic device may be awkward to position when capturing optical input).”

Apple is also dismissive of compromises to fit the camera into a Watch band. “Additionally, low-quality components may not meet a user’s quality expectations,” it says, “(e.g., a low-quality camera may produce low-quality images).”

It's not just that Apple wants you to look nice in photographs. Apple specifically wants a video camera capable of up to 4K at 60 frames per second, or a still camera of up to 12MP.

Either a spacecraft, or an Apple Watch side view with a camera protrusion to the right

That's because this is for more than photographing wildlife, and more than capturing the crowd at Little League. This camera is meant for more than any regular photography or video.

“The digital camera assembly may be used for a variety of purposes,” continues Apple, “including, as non-limiting examples, facial identification, fingerprint sensing, scanning a Quick Response (QR) code, video conferencing, biometric monitoring (e.g., heart rate monitoring), photography, video or image capture, or any combination thereof.”

So with a camera on your Apple Watch, you could unlock all of your Apple devices through Face ID.

The Watch could also use its camera to “capture movement of a user’s body or other objects during certain activities.” Using visual inertial odometry (VIO), “the camera can be used to obtain a high degree of motion sensing accuracy, which may be used to monitor, detect, and/or predict a user’s motion or gesture based on certain characteristics.”

That's a lot to demand of a camera, and Apple is not expecting to be able to fit one under the screen of an Apple Watch. Instead, it will sit on what Apple calls a protrusion, and much of the patent application is about how to do that without making the Watch distracting to wear.

“[A] digital camera assembly may be integrated into the wearable electronic device in a way so as to minimize an effect of the digital camera assembly on other electronic components and/or a form factor of the wearable electronic device,” says Apple.

“For example, in implementations where a digital camera assembly is positioned within an internal cavity (e.g., camera cavity) of a protrusion,” it continues, “the digital camera assembly may extend from the housing, over a band slot, and away from a display, a battery, a circuit assembly, or sensors of the wearable electronic device.”

“Likewise, the protrusion may be shaped to avoid interfering with geometry of the band slot,” says the patent application, “so that a band/strap may still be permitted to couple with the housing of the wearable electronic device.”

The patent application is credited to five inventors, including Christopher M. Warner, whose previous work includes muscle-sensing Apple Watch bands.

iPhone vs Android: Two different photography and machine learning approaches

Apple’s computational photography aims for realism

A controversy with Samsung’s phone cameras has renewed the conversation surrounding computational photography, and highlights the difference between it and Apple’s approach in iOS.

It isn’t a big secret that Apple relies upon advanced algorithms and computational photography for nearly all of its iPhone camera features. However, users are beginning to ask where to draw the line between these algorithms and something more intrusive, like post-capture pixel alteration.

In this piece, we will examine the controversy surrounding Samsung’s moon photos, how the company addresses computational photography, and what this means for Apple and its competitors going forward.

Computational photography

Computational photography isn’t a new concept. It became necessary as people wanted more performance from their tiny smartphone cameras.

The basic idea is that computers can perform billions of operations in the instant after the shutter press, replacing the need for basic edits or applying more advanced corrections. The more we can program the computer to do after the shutter press, the better the photo can be.

For Apple, this started with the dual camera system on the iPhone 7 Plus. Other photographic innovations before then, like Live Photos, could be considered computational photography, but Portrait Mode was the turning point.

Apple introduced Portrait Mode in 2016, which took depth data from the two cameras on the iPhone 7 Plus to create an artificial bokeh. The company claimed it was possible thanks to the dual camera system and advanced image signal processor, which conducted 100 billion operations per photo.
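Apple's actual pipeline is proprietary, but the core idea of depth-based bokeh is simple to illustrate: blur the whole frame, then use a per-pixel depth map to keep the subject sharp and fade toward the blurred version where the scene sits far from the focus plane. Here is a toy sketch on synthetic data; nothing in it reflects Apple's implementation.

```python
# Toy illustration of depth-based "artificial bokeh": blur the image, then use a
# depth map to keep near (foreground) pixels sharp and show the blurred version
# where the scene is far away. Real pipelines add segmentation, simulated lens
# optics, and multi-camera fusion.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait(image, depth, focus_depth=0.2, falloff=0.3):
    """image: HxW float array; depth: HxW array, 0 = near, 1 = far."""
    blurred = gaussian_filter(image, sigma=6)
    # 0 = fully sharp at the focus depth, 1 = fully blurred far from it.
    blur_amount = np.clip(np.abs(depth - focus_depth) / falloff, 0, 1)
    return image * (1 - blur_amount) + blurred * blur_amount

# Synthetic example: a bright square "subject" in front of a noisy backdrop.
img = np.random.rand(240, 320) * 0.3
img[80:160, 120:200] = 1.0
depth = np.ones((240, 320))          # background is far...
depth[80:160, 120:200] = 0.2         # ...subject is near
result = fake_portrait(img, depth)
```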

Needless to say, this wasn’t perfect, but it was a step into the future of photography. Camera technology would continue to adapt to the smartphone form factor, chips would get faster, and image sensors would get more powerful per square inch.

Portrait mode uses computational photography to separate the foreground

In 2023, it isn’t unheard of to shoot cinematically blurred video using advanced computation engines with mixed results. Computational photography is everywhere, from the Photonic Engine to Photographic Styles — an algorithm processes every photo taken on iPhone. Yes, even ProRAW.

This was all necessitated by people’s desire to capture their life with the device they had on hand — their iPhone. Dedicated cameras have physics on their side with large sensors and giant lenses, but the average person doesn’t want to spend hundreds or thousands of dollars on a dedicated rig.

So, computational photography has stepped in to enhance what smartphones’ tiny sensors can do. Advanced algorithms built on large databases inform the image signal processor how to capture the ideal image, process noise, and expose a subject.

However, there is a big difference between using computational photography to enhance the camera’s capabilities and altering an image based on data that the sensor never captured.

Samsung’s moonshot

To be clear: Apple uses machine learning models, or "AI" if you prefer the popular buzzword, for computational photography. The algorithms provide information about controlling multi-image captures to produce the best results or create depth-of-field profiles.

The image processor analyzes skin tone, skies, plants, pets, and more to provide proper coloration and exposure, not pixel replacement. It isn’t looking for objects, like the moon, to provide specific enhancements based on information outside of the camera sensor.

We’re pointing this out because those debating Samsung’s moon photos have used Apple’s computational photography as an example of how other companies perform these photographic alterations. That simply isn’t the case.

Samsung’s moon algorithm in action. Credit: u/ibreakphotos on Reddit

Samsung has documented how its phones, since the Galaxy S10, have processed images using object recognition and alteration. The Scene Optimizer began recognizing the moon with the Galaxy S21.

As the recently-published document describes, “AI” recognizes the moon through learned data, and the detail improvement engine function is applied to make the photo clearer with multi-frame synthesis and machine learning.

Basically, Samsung devices will recognize an unobscured moon and then use other high-resolution images and data about the moon to synthesize a better output. The result isn’t an image captured by the device’s camera but something new and fabricated.

Overall, this system is clever because the moon looks the same no matter where on Earth it is viewed from. The only things that change are the color of the light reflected from its surface and the phase of the moon itself. Enhancing the moon in a photo will always be a straightforward calculation.

Both Samsung and Apple devices take a multi-photo exposure for advanced computations. Both analyze multiple captured images for the best portion of each and fuse them into one superior image. However, Samsung adds an additional step for recognized objects like the moon, which introduces new data from other high-resolution moon images to correct the moon in the final captured image.
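Neither company publishes its fusion code, but the simplest form of multi-frame synthesis is easy to demonstrate: average several aligned exposures so random sensor noise cancels while the underlying detail remains. Below is a stripped-down sketch with a synthetic scene; real pipelines add alignment, ghost rejection, and per-region selection of the sharpest frame.

```python
# Stripped-down multi-frame synthesis: average several aligned exposures so
# random sensor noise cancels while the underlying detail remains.
import numpy as np

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 256), (256, 1))      # stand-in "scene"
frames = [clean + rng.normal(scale=0.2, size=clean.shape) for _ in range(8)]

fused = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - clean)
noise_fused = np.std(fused - clean)
print(f"noise: single frame {noise_single:.3f}, 8-frame stack {noise_fused:.3f}")
# Averaging N frames cuts random noise by roughly sqrt(N) -- here about 2.8x.
```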

Samsung’s moon algorithm explained. Credit: Samsung

This isn't necessarily a bad thing. It just isn't something Samsung has made clear in its advertising or product marketing, which may lead to customer confusion.

The problem with this process, and the reason a debate exists, is how this affects the future of photography.

Long story short, the final image doesn’t represent what the sensor detected and the algorithm processed. It represents an idealized version of what might be possible but isn’t because the camera sensor and lens are too small.

The impending battle for realism

From our point of view, the key tenet of iPhone photography has always been realism and accuracy. If there is a perfect middle in saturation, sharpness, and exposure, Apple has trended close to center over the past decade, even if it hasn’t always remained perfectly consistent.

We acknowledge that photography is incredibly subjective, but it seems that Android photography, namely Samsung, has leaned away from realism. Again, not necessarily a negative, but an opinionated choice made by Samsung that customers have to address.

For the purposes of this discussion, Samsung and Pixel devices have slowly tilted away from that realistic, representational center, vying for more saturation, sharpness, or day-like exposure at night.

The example above shows how the Galaxy S22 Ultra favored more exposure and saturation, which led to a loss of detail. These are innocent, opinionated choices, but in this case the iPhone 13 Pro goes home with a more detailed photo that can be edited later.

This difference in how photos are captured is set in the opinionated algorithms used by each device. As these algorithms advance, future photography decisions could lead to more opinionated choices that cannot be reversed later.

For example, when advanced algorithms change how the moon appears without alerting the user, the image is forever altered to fit what Samsung thinks is ideal. Sure, users could turn the feature off if they knew about it, but they likely won't.

We're excited about the future of photography, but as photography enthusiasts, we hope it isn't so invisible. Like Apple's Portrait Mode, Live Photos, and other processing techniques, it should be opt-in with obvious toggles. It should also be reversible.

Tapping the shutter in a device's main camera app should take a representative photo of what the sensor sees. If the user wants more, let them add it via toggles before the shot or with edits afterward.

For now, try taking photos of the night sky with nothing but your iPhone and a tripod. It works.

Why this matters

It is important to stress that there isn't any problem with replacing the ugly glowing ball in the sky with a proper moon, nor is there a problem with removing people or garbage (or garbage people) from a photo. However, it needs to be a process that is controllable, toggle-able, and visible to the user.

Computational photography is the future, for better or worse

As algorithms advance, we will see more idealized and processed images from Android smartphones. The worst offenders will outright remove or replace objects without notice.

Apple will inevitably improve its on-device image processing and algorithms. But, based on how the company has approached photography so far, we expect it will do so while respecting the user's desire for realism.

Tribalism in the tech community has always caused debates to break out among users. Those have included Mac or PC, iPhone or Android, and soon, real or ideal photos.

We hope Apple continues to choose realism and user control over photos going forward. Giving a company complete opinionated control over what the user captures in a camera, down to altering images to match an ideal, doesn’t seem like a future we want to be a part of.
