Captures are done with Red, Green and Blue filters on a monochrome camera. A typical sequence on Mars runs:
60 sec in Red
60 sec in Blue
60 sec in Green
60 sec in Infra-Red (742)
Typically I shoot R and B first, because I can build a false green from them if something bad were to happen.
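Just to illustrate what that fallback looks like: a synthetic green is commonly approximated as the mean of the red and blue stacks. This is a minimal numpy sketch with placeholder filenames, not part of my actual capture pipeline.

import numpy as np
import imageio.v2 as imageio

# Load the stacked red and blue frames (placeholder filenames).
red = imageio.imread("mars_R_stack.png").astype(np.float32)
blue = imageio.imread("mars_B_stack.png").astype(np.float32)

# Simple synthetic green: the average of red and blue.
# Real false-green recipes often weight the two channels differently.
green = (red + blue) / 2.0

imageio.imwrite("mars_synthG_stack.png", green.astype(np.uint16))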
Each 60-second capture is stacked individually with AutoStakkert (https://www.autostakkert.com/)
Sample Red information:
ZWO ASI290MM
------------------------------------
2020_09_23 04:21:39.450 UT
duration 60.002s
integr. 38.6s
------------------------------------
400x408
exp 3.000ms
gain 200 (33%)
frames 12863
------------------------------------
FireCapture v2.6 beta Settings
------------------------------------
Camera=ZWO ASI290MM
Filter=R
Profile=Mars
Diameter=21.86"
Magnitude=-2.33
CM=61.5° (during mid of capture)
FocalLength=4350mm
Resolution=0.14"
Filename=2020-09-23-0422_1-R-Mars.avi
Date=2020_09_23
Start=042139.450
Mid=042209.451
End=042239.452
Start(UT)=042139.450
Mid(UT)=042209.451
End(UT)=042239.452
Duration=60.002s
Date_format=yyyy_MM_dd
Time_format=HHmmss
LT=UT -5h
Frames captured=12863
File type=AVI
Extended AVI mode=true
Compressed AVI=false
Binning=no
ROI=400x408
ROI(Offset)=808x320
FPS (avg.)=214
Shutter=3.000ms
Gain=200 (33%)
Gamma=50
AutoExposure=off
SoftwareGain=10 (off)
AutoHisto=75 (off)
Brightness=1
HighSpeed=off
USBTraffic=100
AutoGain=off
FPS=100 (off)
Histogramm(min)=0
Histogramm(max)=2
Histogramm=0%
Noise(avg.deviation)=0.00
Limit=60 Seconds
Sensor temperature=23.2°C
Focuser position=26457
Here's a small 10s clip from the middle of 2020-09-23-0422_1-R-Mars.avi.
Clipped 10s converted with VLC: https://imgur.com/dYUlRQj.gifv
From AS2, saved as PNG, imported into GIMP as layers, then exported as a GIF: https://i.imgur.com/2B1EXwB.gifv
Each R, G, B AVI is run through AutoStakkert!3 with the following settings, notably the use of 3x Drizzle for the lower resolution I get with the 2.5x PowerMate.
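As a sanity check on the numbers in the log above, and on why drizzle helps at this sampling, the image scale follows from the usual plate-scale formula. A small sketch; the 2.9 µm pixel size is the ASI290MM sensor spec, and the focal length comes from the FireCapture log.

# Plate scale in arcsec/pixel: 206.265 * pixel_size_um / focal_length_mm
pixel_um = 2.9        # ASI290MM (IMX290) pixel size
focal_mm = 4350.0     # from the FireCapture log
scale = 206.265 * pixel_um / focal_mm
print(f"native sampling : {scale:.3f} arcsec/px")      # ~0.14, matches the log
print(f"3x drizzle      : {scale / 3:.3f} arcsec/px")  # ~0.046 after stacking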
After stacking, the above Red 0427UTC channel looks like this at a stack depth of 2000 frames:
Then in AstraImage I use the following simple deconvolution settings:
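AstraImage is driven through its own UI, so purely as a rough stand-in, a Lucy-Richardson deconvolution with a small Gaussian PSF can be sketched with scikit-image. The PSF size and iteration count here are illustrative guesses, not my AstraImage settings.

import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

stack = img_as_float(io.imread("mars_R_stack.tif"))  # hypothetical stacked mono frame

# Small Gaussian PSF as a stand-in for the seeing/optics blur.
size, sigma = 9, 1.5
y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()

deconv = richardson_lucy(stack, psf, 15)  # iteration count to taste
io.imsave("mars_R_deconv.tif", (np.clip(deconv, 0, 1) * 65535).astype("uint16"))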
I use a Python and Sikuli script to automate my RGB assembly in GIMP after batch sharpening all the mono stacks. Assembly uses the Blue channel's capture midpoint as the time of the assembled image. In this case the Red from 0427 goes into the 0428UTC image:
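My actual assembly runs through GIMP via Sikuli, but the core operation is just stacking the three mono frames into one color image. A minimal numpy sketch; the filenames are placeholders following the FireCapture naming pattern, with the result keyed to the Blue frame's midpoint as described above.

import numpy as np
import imageio.v2 as imageio

# Sharpened mono stacks; placeholder names.
r = imageio.imread("2020-09-23-0427-R-Mars.png")
g = imageio.imread("2020-09-23-0428-G-Mars.png")
b = imageio.imread("2020-09-23-0428-B-Mars.png")

# Stack into an RGB cube; the result inherits the Blue frame's midpoint timestamp.
rgb = np.dstack([r, g, b])
imageio.imwrite("2020-09-23-0428-RGB-Mars.png", rgb)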
Of note, these are the white balance and exposure settings straight from the camera; the image is only stacked, lightly sharpened, and color assembled.
After this process completes, I sort out the best-looking results and use as many as I can in WinJUPOS for derotation. This step takes a lot of the noise out, especially along the edges and limb. Here's a sample measurement of the RGB image:
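The reason derotation pays off: Mars rotates once in roughly 24.6 hours, so features drift across the disc even over a short run of captures. A quick back-of-the-envelope check:

# Mars rotation: ~24.62 h per revolution -> rotation accumulated during a session
rotation_period_h = 24.62
deg_per_min = 360.0 / (rotation_period_h * 60)
print(f"{deg_per_min:.3f} deg/min")             # ~0.24 deg per minute
print(f"{deg_per_min * 30:.1f} deg in 30 min")  # ~7.3 deg over a half-hour run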
After derotating I begin post-processing in Photoshop. Here's how it starts:
Derotated only:
I try to use as many layers as possible so I can blink-compare things as I go. ALT+L, D brings up the duplicate-layer dialog in PS. Each layer gets a name that's meaningful to me, such as Denoise 0.3.
Typically I hit it with Topaz DeNoise just a bit, depending upon the base level of noise left after derotation. In this case I used 0.3.
After this I will try to use the infrared as a faux luminosity layer, but processed with a High Pass filter. In most cases I will substitute the Red channel instead. Essentially you select all, go to CHANNELS, click on Red and copy (CTRL+C), click RGB again to go back to color, and then return to Layers. Back on Layers, PS will create a new layer simply by pasting:
Then I rename it to RasL (Red as Luminosity).
Then grab the High Pass filter, Filters->Other->High Pass (more detailed explanation here - https://www.photoshopessentials.com/photo-editing/sharpen-high-pass/)
Drop the Radius to taste and edit the layer name with your notes.
I don't like how it sharpens the planet's edge, so typically I will use a selection tool to trim it away.
Often I will also use PS Curves with the built-in High Contrast preset (CTRL+M).
Now set the layer blend mode to Soft Light, toggle it on and off and/or set the opacity to taste. In my case I left it at 100%.
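In pixel terms, the high pass layer is just the image minus a blurred copy, and Soft Light re-applies that detail to the base. This sketch uses scipy's Gaussian blur and one common approximation of the Soft Light blend; the radius and the exact blend formula are assumptions, not a pixel-for-pixel match for Photoshop.

import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(img, radius=2.0):
    # High pass ~ original minus blurred copy, re-centered on mid grey.
    return np.clip(img - gaussian_filter(img, radius) + 0.5, 0.0, 1.0)

def soft_light(base, blend):
    # Approximate Soft Light blend (inputs in 0..1, W3C-style formula).
    low = base - (1 - 2 * blend) * base * (1 - base)
    high = base + (2 * blend - 1) * (np.sqrt(base) - base)
    return np.where(blend <= 0.5, low, high)

# rgb: float RGB image in 0..1; red: its red channel used as "RasL"
# hp = high_pass(red, radius=2.0)
# sharpened = soft_light(rgb, hp[..., None])  # broadcast the mono layer over RGB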
Then I will take both layers, duplicate them, and flatten them together into a new RRGB layer.
Now I hit this layer with Google Nik (https://nikcollection.dxo.com/nik-collection-2012/?awc=18169_1601087977_64434cdc37e2ddfa2a681866ee67a3fe&utm_source=affiliation&utm_medium=awin)
Nik's output sharpener is fantastic and simple. Here are the settings I used:
Once done, the new layer gets the same edge/feather selection removal as in the high pass filter section. Again the new layers are merged and duplicated.
Now I will align the RGB channels and balance the color. Color balance is done with a stare-and-compare against Damian Peach's photo from nearly the same time - http://www.damianpeach.com/mars2020/m2020_09_22dp.jpg
I noticed the edges were quite misaligned, likely due to poor seeing. I fixed that and then used the new edge, without the misaligned center portion, essentially an inverse of the selection above.
Color balance was adjusted via Levels:
RGB Input up to 191
RED Input up to 229
GREEN unchanged
BLUE Output down to 197
Then Saturation was bumped up 11 points.
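For reference, a Levels move like the ones above is just a linear remap of the input black/white points onto the output black/white points (ignoring the gamma slider). A small sketch, using the 191 input-white and 197 output-white values from above and leaving everything else at defaults:

import numpy as np

def levels(channel, in_black=0, in_white=255, out_black=0, out_white=255):
    # Linear Levels remap for an 8-bit channel (gamma left at 1.0).
    x = np.clip((channel.astype(np.float32) - in_black) / (in_white - in_black), 0, 1)
    return (out_black + x * (out_white - out_black)).astype(np.uint8)

# e.g. rgb_adj  = levels(rgb, in_white=191)      # brighten the composite
#      blue_adj = levels(blue, out_white=197)    # pull blue down slightly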
Next I copy the color-and-saturation layer into a new layer, set it to Soft Light, and bring the opacity of the top layer down to 20-30%. This helps improve contrast and darkens the limb, giving back a bit more of the original 3D appearance. OF NOTE, I do bring the LD (limb darkening) value down in WinJUPOS when performing derotation.
Finally I flatten the new soft edge layer and bring the brightness up to near max with Levels.
The rest is all text assembly in GIMP!
Here's one of the variants of a 'final' processed measurement: