ClickPoints is a program written in the Python programming language, which serves on the one hand as an image viewer
and on the other hand as a data display and annotation tool. Every frame can be annotated with a description, marked
points/tracks, or marked areas (paint brush). This makes it suitable for viewing image data, manually evaluating data, creating
semi-automatic evaluations, or displaying the results of automatic image evaluation.
ClickPoints can be updated like any other Python package with:
pip install clickpoints --upgrade
You can check the current version number by opening ClickPoints and clicking on the cog wheel icon to open the options dialog; the version number is shown in its top right corner.
If you want to actively work on the ClickPoints code, you should clone the repository. First of all you need to have git installed (plain Git or directly a git client, e.g. GitHub Desktop).
Then you can open a command line in the folder where you want to install ClickPoints (e.g. C:\Software) and run the following command:
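The clone command itself is not reproduced above; a minimal sketch, assuming the public ClickPoints repository on GitHub is the one you want to work on:
git clone https://github.com/fabrylab/clickpoints.git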
Once ClickPoints has been installed it can be started directly from the Start Menu/Program launcher.
This will open ClickPoints with an empty project.
Images can be added to the project by using the corresponding button in the toolbar.
The project can be saved by clicking on the save button.
ClickPoints can also be used to directly open images, videos or folders by right clicking on them, which will open an
unsaved project that already contains these images. This way ClickPoints functions as an image viewing tool.
ClickPoints can be opened with various files as target:
an image, loading all images in the folder of the target image.
a video file, loading only this video.
a folder, loading all image and video files of the folder and its subfolders, which are concatenated to a single image stream.
a previously saved .cdb ClickPoints Project file, loading the project as it was saved.
Pressing Esc closes ClickPoints.
To easily access marker, mask, track or other information stored in the .cdb ClickPoints Project file,
we provide a Python-based Database API.
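As a quick taste of the API (a minimal sketch; the project filename is a placeholder), a saved project can be opened from a Python script like this:
import clickpoints
db = clickpoints.DataFile("project.cdb")   # path to a saved .cdb project
print(db.getImages().count())              # e.g. print the number of images in the project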
Attention
If you plan to evaluate your data set or continue working on the same data set you must save the project -
otherwise all changes will be lost upon closing the program. If a project was saved, all changes are saved
automatically upon frame change or by pressing S
ClickPoints opens with a display of the current image fit to the window. The display can be
zoomed, using the mouse wheel
panned, holding down the right mouse button
rotated using R.
To fit the image into the window press F and switch to full screen mode by pressing W.
Note
Default rotation on startup and rotation steps with each press of R can be defined in the
ConfigClickPoints.txt with the entries rotation= and rotation_steps=.
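For example, the following entries (the values are only illustrative) would start with a 90° rotation and rotate in 90° steps:
rotation=90
rotation_steps=90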
ClickPoints was designed with multiple use cases in mind, and therefore provides multiple ways to open files.
Attention
Opening a set of files for the first time can take some time, as time and meta information have to be extracted
from the filename, TIFF or EXIF header. For large collections of files it is recommended to save the collection
as a project and use the .cdb file for starting ClickPoints. This saves time, as no file system search is necessary
and all meta information is already stored in the .cdb.
ClickPoints can be started empty by using a desktop link or calling ClickPoints.bat from CMD (Windows),
or respectively ClickPoints from a terminal (Linux).
ClickPoints can be run directly from the command line, e.g. to open the files in the current or a specific folder:
ClickPoints "C:\Images"
or
python ClickPoints.py -srcfile="C:\Images"
Note
To use the short version of calling ClickPoints without the path, you have to add the ClickPoints base path to
the system's or user's PATH variable (Windows) or create an alias (Linux).
Furthermore it is possible to supply a text file where each line contains the path to an image or video file.
This is useful e.g. to open a fixed set of files, a list of files extracted by another application, or files provided by a database interface.
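A minimal sketch of such a file list (the paths are placeholders):
C:\Images\frame_000.jpg
C:\Images\frame_001.jpg
C:\Videos\recording.avi
Saved e.g. as files.txt, it can then be passed to ClickPoints as the opening target:
ClickPoints "files.txt"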
The config file contains parameters to adjust ClickPoints to the user's needs,
adjust default behaviour or configure key bindings. A GUI is available to
set parameters during runtime; the ConfigFile is used to set default behaviour.
Config files in ClickPoints are designed to optimize the workflow.
Upon opening a file, the path structure is searched for the first occurrence of a valid config file,
thereby allowing the user to specify defaults for files grouped in one location.
If no config file is found, the default config values as set in the ClickPoints base path are used (green).
A config file located in the path "research" (blue) will overwrite these values and is used for files opened in child paths,
allowing the user to define a preferred setup. In addition we can add a second config file lower in the path tree to specify
a specific setup for all files that were stored under "Experiment_3". This can contain a set of default marker names,
which features to use or which add-ons to include, as sketched below.
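As a sketch of such a layered setup (the entries shown are options mentioned elsewhere in this documentation; the exact values and the list form of launch_scripts are assumptions), a ConfigClickPoints.txt placed under "Experiment_3" could look like this:
# ConfigClickPoints.txt for the "Experiment_3" folder
tracking_connect_nearest=True    # connect new markers to the nearest track (low track density scenes)
launch_scripts=["Track.py"]      # load the Track add-on on startup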
This tutorial gives a short introduction on how to get started manually labeling your own tracks,
for a quick evaluation or as ground truth for the evaluation of automated algorithms.
Marked results and the corresponding images must be stored somewhere, therefore the project has to be named and saved.
Click on the save button and select a storage location and file name.
Note
References to images and videos are stored relative as long as the files reside parallel to or below the project file in the path tree.
If the files reside above or on a different branch, drive, or network location, the absolute path is stored.
Before we can get started we have to specify a marker type. Marker types are like classes of objects, e.g. we might use
a class for birds and another one for ships. Every marker type can have multiple tracks.
To open the marker menu either press F2 or click on the Marker button to switch to edit mode (Fig. A).
Then right click onto the marker list to open the marker menu (Fig. B). You can reuse the default marker or create a new marker
by selecting + add type. Choose a name and color for your new marker type and make sure to set the type to TYPE_Track.
Confirm your changes by pressing save.
To add more tracking types select + add type and repeat the procedure.
left & right cursor keys to go one frame forward and backward
Jump a specified number of frames with the numpad keys. See Jumping Frames.
Use the frame and time navigation sliders by clicking or dragging the cursor to the desired position.
Jump to a specific frame by clicking on the frame counter and entering the desired frame number
Press the play button to play the dataset with the specified frame rate or as fast as feasible.
Note
Due to the sequential compression of videos, traversing a video backwards is computationally expensive. ClickPoints provides a
buffer so that the last N frames are stored and can be retrieved without any further computational cost. The default buffer size
can be specified in the config.
Warning
Be careful not to reserve too much RAM for the frame buffer as it will drastically reduce performance!
The setup steps are completed; we can begin to mark some tracks.
Activate the type of marker you want to use by clicking on the label "bird" or pressing the associated number key.
Set the first marker by clicking on the image.
Switch to the next frame using the right cursor key.
The track now shows up with reduced opacity, indicating there is no marker for the current frame.
Upon dragging the marker (left click & hold) to the current position (release) a line indicates the connection to the last position. The track shows up with full opacity again.
If a frame is skipped, the marker can be dragged as usual, but no connecting line will appear, indicating a fragmentation of the track.
To create a second track, repeat step 1.
Markers are automatically saved upon frame change or by pressing the S key.
For low density tracks ClickPoints provides the "connect nearest" mode. Clicking on the image will automatically connect
the new marker to the closest track in the last frame, speeding up tracking for scenes with low track density. Dragging
markers is still supported and is useful for intersecting tracks.
To activate “connect nearest” mode, set the config parameter tracking_connect_nearest=True.
ClickPoints provides two timelines for navigation, a frame based and a timestamp based timeline. The frame based timeline
is used by default, the timestamp timeline can be activated if time information of the displayed files is available.
Time information extraction is implemented for the filename, the EXIF or TIFF headers.
Frame Timeline example showing tick marks for marker and annotations.¶
The timeline is an interface at the bottom of the screen which displays the range of currently loaded frames and allows for
navigation through these frames. It can be displayed by clicking on the corresponding button.
To start/stop playback use the playback button at the left of the timeline or press Space. The label next to
it displays which frame is currently displayed and how many frames the frame list has in total.
The time bar has one slider to denote the currently selected frame and two triangular markers to select the start and
end frame of the playback. The keys B and N set the start/end marker to the current frame.
The two tick boxes at the right contain the current frame rate and the number of frames to skip during playback
between each frame.
To go directly to a desired frame simply click on the frame display (left) and enter the frame number.
Each frame which contains markers or masks is marked with a green tick mark (see Marker and
Mask) and each frame marked with an annotation (see Annotations) is marked with a
red tick. To jump to the next annotated frame press Ctrl+Left or Ctrl+Right.
The date timeline displays the timestamps of the loaded data set.
To navigate to desired time point simply drag the current position marker or click on the point on the date timeline.
The timeline can be panned and zoomed by holding the left mouse button (pan) and using the mouse wheel (zoom).
It aims to make it easier to get an idea of the time distribution of the data set,
to find sections of missing data and facilitate navigation by a more meaningful metric than frames.
The extraction of timestamps from the filename is faster than from the EXIF header. If you plan to repeatedly open files without using
a .cdb to store the timestamps, renaming them once might be beneficial.
A list of timestamp search strings can be specified in the config file as shown in the code example below. As the
search will be canceled after the first match, it is necessary to order the search strings by decreasing complexity.
enable or disable the date timeline by setting this value to True or False
timestamp_formats= (list of strings)
list of match strings for images, with decreasing complexity
timestamp_formats2= (list of strings)
list of match strings for videos with 2 timestamps, with decreasing complexity
# default values:
# for image formats with 1 timestamp
timestamp_formats=[r'%Y%m%d-%H%M%S-%f',   # e.g. 20160531-120011-2 with fraction of second
                   r'%Y%m%d-%H%M%S']      # e.g. 20160531-120011
# for video formats with 2 timestamps (start & end)
timestamp_formats2=[r'%Y%m%d-%H%M%S_%Y%m%d-%H%M%S']
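These are ordinary datetime format strings; as a quick sanity check (a sketch, the filename stem is a placeholder), Python's strptime can be used to verify that a filename matches one of them:
from datetime import datetime
# parse the timestamp from a filename such as "20160531-120011.jpg"
timestamp = datetime.strptime("20160531-120011", r'%Y%m%d-%H%M%S')
print(timestamp)  # 2016-05-31 12:00:11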
The gamma correction is a slider box in the bottom right corner which allows changing the brightness and gamma of the
currently displayed image. It can be opened by clicking on the corresponding button.
The box in the bottom right corner shows the current gamma and brightness adjustment. Moving a slider changes the
display of the currently selected region in the images. The background of the box displays a histogram of brightness
values of the current image region and a red line denoting the histogram transform given by the gamma and brightness
adjustment. Pressing the update button or the key G sets the currently visible region of the image as the active region for
the adjustment. Especially for large images this increases performance significantly, as only a portion of the
image is adjusted. A click on reset resets gamma and brightness adjustments.
The same image for different gamma values of 1, 0.5 and 1.5.¶
The gamma value changes how bright and dark regions of the images are
treated. A low gamma value (<1) brightens the dark regions up while
leaving the bright regions as they are. A high gamma value (>1) darkens
the dark regions of the image while leaving the bright regions as they
are.
The same image for different brightness values, where once the lower and once the upper range was adjusted.¶
The brightness can be adjusted by selecting the Max and Min values.
Increasing the Min value darkens the image by setting the Min value (and
everything below) to zero intensity. Decreasing the Max value brightens
the image by setting the Max value (and everything above) to maximum
intensity.
The video exporter allows for the export of parts of the currently loaded images as a video, image
sequence or gif file. It can be opened using the corresponding button or by pressing Z. A dialog will open which
allows selecting an output filename for a video, an image sequence (which has to contain a %d number placeholder) or a
gif file. Frames are exported starting from the start marker in the timeline to the end marker in the timeline. The
framerate is also taken from the timeline. Images are cropped according to the currently visible image part in the main window.
An example of both the annotation editor and the annotation overview window.¶
Annotations are text comments, optionally with a rating and tags, that are attached to a frame. To annotate a frame or
edit the annotation of a frame press A or use the corresponding button and fill in the information in the dialog. The
frame will be marked with a red tick in the timeline. To get a list of all annotated frames press Y.
In this list clicking an annotation results in a jump to the frame of the annotation.
An example image showing three different marker types and some markers placed on the image.¶
Markers are added to a frame to refer to pixel positions. Markers can have different types to mark different objects.
They can also be used in tracking mode to recognize an object over different frames.
The marker editor can be opened by clicking on the corresponding button.
The list of available markers is displayed at the top left corner. A marker type can be selected either by clicking on
its name or by pressing the corresponding number key. A left click in the image places a new marker of the currently
selected type. Existing markers can be dragged with the left mouse button and deleted by clicking on them while
holding the control key.
To save the markers press S or change to the next image, which automatically saves the current markers.
MB1
place a marker or track point at the current mouse position
TYPE_Normal results in single markers. TYPE_Rect joins every two consecutive markers as a rectangle. TYPE_Line joins
every two consecutive markers as a line. TYPE_Track specifies that these markers should use tracking mode (see section
Tracking Mode).
Pressing T toggles between three different marker displays. If the smallest size is selected, the markers can’t be
moved. This makes it easier to work with a lot of markers on a small area.
The same marker in different size configurations.¶
Often objects which occur in one image also occur in another image (e.g. the images are part of a video). Then it is
necessary to make a connection between the object in the first image and the object in the second image. Therefore
ClickPoints features a tracking mode, where markers can be associated between images. It can be enabled using the
TYPE_Track mode for a marker type. The following image displays the difference between normal mode (left) and tracking
mode (right):
The same marker in normal mode (left) and in tracking mode (right). The track always displays all previous positions
connected with a line, when they are from two consecutive images.¶
To start a track, mark the object in the first image. Then switch to the next image and the marker from the first image
will still be displayed but only half transparent. To add a second point to the track grab the marker and move it to the
new position of the object. Continue this process through the images where you want to track the object. If the object
didn’t move from the last frame or isn’t visible, an image can be left out, which results in a gap in the track. To
remove a point from the track, click it while holding control.
The Marker Editor is used to manage marker types. New marker types can be created, existing ones can be modified or
deleted.
The Marker Editor used to create and change marker types, navigate to tracks and marks and delete marker,
tracks and types¶
Creating Marker Types
To create a new marker type open the marker editor via the corresponding button or by right clicking on the marker display or a marker.
Select the + add type field, enter a name, set the marker mode to marker, line, rectangle or track and choose a color.
Further modifications can be achieved via the text and style fields; for more details see the following sections.
Editing Marker Types
To edit a marker type, simply select the type from the menu, change the desired values and save the changes by pressing Save.
Note
It is NOT possible to change marker types as long as marker objects of this type exist. E.g. you can’t make lines out
of regular markers as they don’t have a second point.
Navigation
The editor can also be used to navigate. Selecting a marker will bring you to the frame the marker is placed in.
By clicking on the arrow in front of the type name the marker or track overview unfolds. Selecting a marker of a track
will bring you to the frame it is placed in.
Deleting Types, Tracks and Markers
Types, tracks and markers can be removed by selecting the object in the tree and pressing the Remove button.
By removing a marker type all markers and tracks of this type are removed, removing a track will remove all markers
of this track.
Style definitions provide additional options to change the appearance of markers. They are inherited from the marker
type to the track and from the track to the marker itself. If no track is present, the marker inherits its style
directly from the type. This allows defining type, track and marker specific styles.
Styles can be set using the Marker Editor (right click on any marker or type).
The styles use the JSON format for data storage. The following fields can be used:
Marker Color - "color":"#FF0000"
Defines the color of the marker in hex format.
Color can also be a matplotlib colormap followed optionally by a
number (e.g. jet(30)), then that many colors (default 100) are
extracted from the color map and used for the marker/tracks to color
every marker/track differently.
Marker Shape - "shape":"cross"
Defines the shape of the marker. All shapes can be converted to outlines by appending “-o” to the name.
The line width of the line used to display gaps in the track history.
Track Marker Shape - "track-point-shape":"circle"
The marker shape used to display the track history.
values:circle, ring (default), rect, cross, none
Track Marker Scale - "track-point-scale":1
The scaling of markers used to display the track history.
Style Examples:
{"color":"jet(30)"}# style for providing a marker type with 30 different colors{"track-line-style":"dash","track-point-shape":"none"}# change the track style
The text field allows attaching text to marker, line, rectangle and track objects.
Text properties are inherited from the marker type to the track and from the track to the marker itself.
If no track is present the marker inherits its text directly from the type.
This allows to define type, track and marker specific texts.
Text can be set using the Marker Editor (right click on any marker or type).
ClickPoints provides a SmartText feature, enabling the display of self-updating text showing predefined values.
SmartText keywords always start with a $ character.
The available keywords depend on the object type, as explained in the following overview:
General
\n
insert a new line
$marker_id
inserts the id of the marker, line or rectangle object
$x_pos
inserts the x position of the marker, first marker of a line or top left marker of a rectangle
$y_pos
inserts the y position of the marker, first marker of a line or top left marker of a rectangle
Line
$length
inserts the length of the line in pixel with 2 decimals.
Rectangle
$area
inserts the area of the rectangle in pixel with 2 decimals.
Track
$track_id
inserts the track id of the track.
Text Examples:
# regular Text
Marker: "Hello World!"                                # shows the text Hello World!
# SmartText
Track: "ID_$track_id"                                 # shows the track ID
Line: "$x_pos | $y_pos \n$length px"                  # shows the x & y coordinate and length
Rect: "ID_$marker_id\n$x_pos | $y_pos \n$area px²"    # shows the object id, its x & y coordinate and area
Using regular text and SmartText features for lines, rectangles and tracks¶
An image where 7 regions have been marked with different masks.¶
A mask can be painted to mark regions in the image with a paint brush. The mask editor can be opened by clicking on
the corresponding button.
A list of available mask colors is displayed in the top right corner. Switching to paint mode can be done using the key
P, pressing it again switches back to marker mode. Colors can be selected by clicking on its name or pressing the
corresponding number key. Holding the left mouse button down draws a line in the mask using the selected color. To
save the mask press S or change to the next image, which automatically saves the current mask.
The mask type delete acts as an eraser and allows to remove labeled regions.
A right click on a color name opens the mask editor menu, which allows the creation, modification and deletion of mask
types. Every mask type consists of a name and a color.
Alternatively to selecting a color via the number keys or a click on its name, the color currently below the cursor
can be picked by pressing K.
Updating masks can be slow if the images are very large. To enable fast painting of large masks, ClickPoints can disable
the automatic updates of the mask via the option AutoMaskUpdate. If automatic updates are disabled,
the key M redraws the currently displayed mask.
Example of the info hud displaying time and exposure exif data from a jpg file.¶
This info hud can display additional information for each image. Information can be obtained from the filename, jpeg exif
information or tiff metadata or be provided by an external script.
The text can be set using the options dialog. Placeholders for additional information are written with curly brackets {}.
The keyword from the source (regex, exif or meta) is followed by the name of the information in brackets [], e.g.
{exif[rating]}. If the text is set to @script the info hud can be filled using an external script.
Use \n to start a new line.
To extract data from the filename a regular expression with named fields has to be provided.
The values present in the meta field of tiff files vary by the tiff writer. ClickPoints can only access tiff meta data
written in the json format in the tiff meta header field, as done by the tifffile python package.
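A minimal sketch of such a named-field expression and a matching info hud text (the filename pattern and field name are only illustrative):
# regular expression extracting a named "timestamp" field from filenames like "20160531-120011_cam1.jpg"
.*(?P<timestamp>\d{8}-\d{6})_.*
# info hud text using the extracted field together with an EXIF value
Time: {regex[timestamp]} \n Rating: {exif[rating]}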
Add-ons are helpful scripts which are not part of the main ClickPoints program, but can be loaded on demand to do some evaluation task.
They can be loaded by clicking on the add-on button and selecting the add-on from the list. ClickPoints already comes
with a couple of add-ons, but it is easy to add your own or extend existing ones.
Each add-on will be assigned to a key from F12 downwards (F12, F11, F10 and so on) and a button will
appear for each add-on next to the add-on button. Hitting the key or pressing the button will start the run function
of the add-on in a separate thread, or tell the thread to stop if it is already running.
To configure ClickPoints to already have scripts loaded on startup, you can define them in the ConfigClickPoints.txt
file as launch_scripts=.
For writing your own add-ons please refer to the Add-on API.
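As an orientation, a minimal add-on skeleton could look like the sketch below; treat it as an assumption and check the exact base-class interface against the Add-on API documentation:
import clickpoints

class Addon(clickpoints.Addon):
    def __init__(self, *args, **kwargs):
        clickpoints.Addon.__init__(self, *args, **kwargs)
        # self.db gives access to the opened .cdb project

    def run(self, start_frame=0):
        # called in a separate thread when the add-on key or button is pressed
        print("the current project contains", self.db.getImages().count(), "images")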
This add-on takes markers in one image and tries to find the corresponding image parts in the subsequent images.
To use it, open a ClickPoints session and add the add-on Track.py via the add-on button.
Create a marker type with mode TYPE_Track. Mark every object which should be tracked with a marker of this type. Then hit F12 (or the button you
assigned the Track.py to) and watch the objects to be tracked. You can at any point hit the key again to stop the tracking.
If the tracker has made errors, you can move the marker by hand and restart the tracking from the new position.
The algorithm tracks the marker positions using a sparse iterative Lucas-Kanade optical flow algorithm [1].
Attention
If the markers are not of a TYPE_Track type, they are not tracked by Track.py. Also, markers which have already
been tracked are only tracked again if they were moved in ClickPoints.
Jean-Yves Bouguet. Pyramidal implementation of the affine lucas kanade feature tracker description of the algorithm. Intel Corporation, 5(1-10):4, 2001.
Open the mask editor and select the button "+ add type" to add a mask type for each cell.
3. Paint the cell area for each cell in each image¶
Click on the paint brush and the name of the mask-type. Then paint the area of the cell in the image. Use the arrow keys
to navigate through the images and paint the cell in each image. From these regions the green image channel will be
summed as the total fluorescence intensity of the cell at this time.
Repeat this process for all images.
See also
For more information on the usage of masks, see the page on Mask.
Click on the button “connect” on the left side of the window. Now click on one cell and drag the mouse to another cell
that should be connected to the first cell. Repeat this for all cells you want to link. Links indicate that diffusion
between these cells is allowed and a diffusion value will be fitted for this link later.
Links are only needed in one image of the sequence, not for all images.
See also
For more information on the usage of markers, see the page on Marker.
Click on the button "Calculate Intensities". The add-on sums all pixel intensity values for the regions covered by the masks.
The outputs can be seen in the table directly under the button. The graph to the right of the table shows the intensities over time.
Click on the button “Calculate Diffusion”. The add-on tries to find diffusion constants, so that the diffusion equation
describes the change of the fluorescence intensities in the cells. The resulting values are printed in the table under
the button and the simulated diffusion is shown in the graph to the right of the table.
This add-on takes a region in the image and tries to find it in every image. The offset is saved for every image to correct
for drift in the video.
To use it, open a ClickPoints session and add the add-on DriftCorrection.py via the add-on button.
When you first start the script a marker type named drift_rect is created. Use this type to select a region in the
image which remains stable over the course of the video. Start the drift correction script by using F12 (or the key
the script is connected to). The drift correction can be stopped and restarted at any time using the key again.
An image of cell nuclei before and after executing the Cell Detector addon.¶
This add-on is designed to take a microscope image of fluorescently labeled cell nuclei and find the coordinates of
every cell.
To use it, open a ClickPoints session and add the add-on CellDetector.py via the add-on button.
Start the cell detector script by using F12 (or the key the script is connected to). All found cell nuclei will
be labeled with a marker.
Attention
The Cell Detector won’t work for cells which are too densely clustered. But ClickPoints allows you to review and adjust
the results if some cells were not detected.
The two axis are marked with the corresponding markers and the data points with the data markers. The start and end
points of the axis are assigned a text containing the corresponding axis value.¶
This add-on helps to retrieve data from plots.
To use it, open a ClickPoints session and add the add-on GrabPlotData.py via the add-on button.
Sometimes it is useful to extract data from plotted results found in publications to compare them with own results or
simulations. ClickPoints therefore provides the add-on “GrabPlotData”. It uses three marker types. The types “x_axis” and
“y_axis” should be used to mark the beginning and end of the x and y axis of the plot. Markers should be assigned a text
containing the value which is associated with this point on the axis. These axis markers are used to remap the pixel
coordinates of the “data” markers to the values provided by the axis. These remapped values are stored in a “.txt” file
that has the same name as the image.
Attention
This can only be used if the axes are not scaled logarithmically. Only linear axes are supported.
The examples demonstrate various functions of ClickPoints and show how data can be
processed with ClickPoints and later easily evaluated.
To keep the download size of ClickPoints down, the examples are kept in a separate repository. They can be downloaded
here.
This example will explain how to retrieve markers from an existing ClickPoints database and use them for evaluation. The example database contains three images with markers of two different types ("adult" and "juvenile").
First two simple examples explain how to receive the data from the database, then two examples also show how to plot the data, and finally one example shows how to add markers to a database.
[1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import clickpoints
# load the example data
clickpoints.loadExample("king_penguins")
Iterate over the images using getImages() to get Image objects. Then query the Marker objects for each image and for the "adult" marker type (see getMarkers()).
[2]:
# open the database
with clickpoints.DataFile("count.cdb") as db:
# iterate over images
for index, image in enumerate(db.getImages()):
# get the "adult" markers of the current image
markers = db.getMarkers(image=image, type="adult")
# print the results
print("image:", image.filename, "\t", "adult penguins:", markers.count())
Or we can get the positions of each juvenile penguin in each image as a Nx2 array, by converting the response of getMarkers() to an array.
[3]:
# open the database
with clickpoints.DataFile("count.cdb") as db:
# iterate over images
for index, image in enumerate(db.getImages()):
# get the "adult" markers of the current image
markers = db.getMarkers(image=image, type="juvenile")
# convert the result to an array
markers_array = np.array(markers)
# print the results
print("image:", image.filename, "\t", "juvenile penguin positions:", markers_array.shape)
We can iterate over all markers and get the image filename and its position.
[4]:
# open the database
with clickpoints.DataFile("count.cdb") as db:
# get all markers without any filtering
markers = db.getMarkers()
# iterate over all markers (limit here to the first 10 to keep the print short)
for marker in markers[:10]:
# get the image assigned to the marker (and print its filename)
# get the type assigned to the marker (and print its name)
# and print x and y position of the marker
print(marker.image.filename, marker.type.name, marker.x, marker.y)
# initialize empty lists
marker_count = []
image_timestamp = []
# open the database
with clickpoints.DataFile("count.cdb") as db:
# iterate over images
for index, image in enumerate(db.getImages()):
# get the "adult" markers of the current image
markers = db.getMarkers(image=image, type="adult")
# store the timestamp and marker count each in a list
image_timestamp.append(image.timestamp)
marker_count.append(markers.count())
# plot the lists
plt.plot(image_timestamp, marker_count, "o-")
# label the plot
plt.xticks(image_timestamp)
plt.xlabel("time of image")
plt.ylabel("count of adult penguins")
plt.show()
Now we want to plot an example image together with its markers as points.
We can now use getImage() to get the first image of the sequence and load the data from this file. This can now be displayed with matplotlib. Then we use the image and type keyword of the getMarkers() function to filter out only markers from this image and the given type.
[6]:
# open the database
with clickpoints.DataFile("count.cdb") as db:
# get the first image
im_entry = db.getImage(0)
# we load the pixel data from the Image database entry
im_pixel = im_entry.data
# plot the image
plt.imshow(im_pixel)
# get the adults positions in the image and convert it to an array
adult_positions = np.array(db.getMarkers(image=im_entry, type="adult"))
# plot the coordinates of the markers
plt.plot(adult_positions[:, 0], adult_positions[:, 1], 'C0o', ms=2)
# get the juveniles positions in the image and convert it to an array
juvenile_positions = np.array(db.getMarkers(image=im_entry, type="juvenile"))
# plot the coordinates of the markers
plt.plot(juvenile_positions[:, 0], juvenile_positions[:, 1], 'C1o', ms=2)
# zoom into the image
plt.xlim(100, 2000)
plt.ylim(1200, 400)
plt.show()
Here, we create a new database (open in write mode “w”, which creates a new database) and add all images we find in the current folder to it (setImage()).
To add markers to the image, we first need to define a marker_type (setMarkerType()). Then we add markers to each image (setMarkers()). In this case for demonstration purposes, we just add the markers at random positions.
[7]:
from pathlib import Path
# open the database
with clickpoints.DataFile("result.cdb", "w") as db:
# add a marker type we want to use for adding markers
# it has the name "nosie" and the color "#FF0000", e.g. red
# make it a type for normal markers (default)
marker_type = db.setMarkerType(name="noise", color="#FF0000", mode=db.TYPE_Normal)
# find all images in the folder and iterate over them
for image_filename in Path(".").glob("*_microbs_GoPro.jpg"):
# add the image to the database
im = db.setImage(image_filename)
# draw some random position where we want to add markers
x, y = np.random.rand(2, 10)
# add the marker positions to the image with the given marker type
# instead of giving the marker type, we could also just provide its name "noise"
db.setMarkers(image=im, x=x, y=y, type=marker_type)
This example will explain how to retrieve tracks from an existing ClickPoints database and use them for evaluation. The example database contains 68 images with track markers of 12 tracks.
[1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import clickpoints
# load the example data
clickpoints.loadExample("magnetic_tweezer")
We query all the tracks found in the database and iterate over them. For each track we get the points as an Nx2 array.
Be aware that this method does not handle well tracks with missing data points.
[2]:
# open database
with clickpoints.DataFile("track.cdb") as db:
# get all tracks
tracks = db.getTracks()
# iterate over all tracks
for track in tracks:
print(track.type.name, track.points.shape)
We can also receive all the tracks of the database in one array. This is usually quicker for large databases than querying each track separately. The resulting array is (number of tracks) x (number of images) x 2. If a track does not have a marker in one of the images, np.nan values are filled into the array.
[3]:
# open database
with clickpoints.DataFile("track.cdb") as db:
data = db.getTracksNanPadded()
print(data.shape)
Then we get all the tracks with getTracks(), in this example without any filtering, as we just want all the tracks available. We can iterate over the received object and access the Track objects. From them we can get the points and calculate their displacements with respect to their starting points.
[4]:
# open database
with clickpoints.DataFile("track.cdb") as db:
# get all tracks
tracks = db.getTracks()
# iterate over all tracks
for track in tracks:
# get the points
points = track.points
# calculate the distance to the first point
distance = np.linalg.norm(points[:, :] - points[0, :], axis=1)
# plot the displacement
plt.plot(track.frames, distance, "-o")
# label the axes
plt.xlabel("# frame")
plt.ylabel("displacement (pixel)");
We can do the same with the complete track array, which for large databases is significantly faster than querying each track separately.
[5]:
# open database
with clickpoints.DataFile("track.cdb") as db:
# get all tracks (tracks x images x 2)
points = db.getTracksNanPadded()
# get the distance to the first point of each track
distance = np.linalg.norm(points[:, :, :] - points[:, 0:1, :], axis=2)
# plot the distances
plt.plot(distance.T, "-o")
# label the axes
plt.xlabel("# frame")
plt.ylabel("displacement (pixel)");
We can now use getImage() to get the first image of the sequence and load the data from this file. This can now be displayed with matplotlib. To draw the tracks, we iterate over the track list again and plot all the points, as well as the starting point of each track, for a visualisation of the tracks.
[6]:
# open database
with clickpoints.DataFile("track.cdb") as db:
# get the first image
im_entry = db.getImage(0)
# we load the pixel data from the Image database entry
im_pixel = im_entry.data
# plot the image
plt.imshow(im_pixel, cmap="gray")
# iterate over all tracks
for track in tracks:
# get the points
points = track.points
# plot the beginning of the track
cross, = plt.plot(points[0, 0], points[0, 1], '+', ms=14, mew=1)
# plot the track with the same color
plt.plot(points[:, 0], points[:, 1], lw=3, color=cross.get_color())
# plot the track id with a little offset and the same color
plt.text(points[0, 0]+5, points[0, 1]-5, "#%d" % track.id, color=cross.get_color(), fontsize=15)
# zoom into the image
plt.xlim(600, 800)
plt.ylim(400, 200);
Here, we create a new database (open in write mode “w”, which creates a new database) and add all images we find in the current folder to it (setImage()).
To add tracks to the images, we first need to define a marker_type (setMarkerType()). Then we create track objects for all the tracks we want to create (setTrack()). Finally, we add track markers for each track in each image (setMarkers()). In this case, for demonstration purposes, we just create random walk tracks.
[7]:
from pathlib import Path
# Fix the seed
np.random.seed(0)
# we want 10 tracks
N = 10
# open database
with clickpoints.DataFile("tracking.cdb", "w") as db:
# add a marker type we want to use for adding track markers
# it has the name "trajectories" and the color "#FF0000", e.g. red
# make it a type for tracks markers (TYPE_Track)
track_type = db.setMarkerType(name="trajectories", color="#FF0000", mode=db.TYPE_Track)
# create the new tracks with the type
# (Note: instead of giving the track type, we could also just provide its name "trajectories")
tracks = []
for i in range(N):
track = db.setTrack(type=track_type)
tracks.append(track)
# Create initial positions
points = np.random.rand(N, 2)
# find all images in the folder and iterate over them
for image_filename in Path(".").glob("frame*.jpg"):
# add the image to the database
im = db.setImage(image_filename)
# Move the positions (in this case we just create random walk tracks)
points += np.random.rand(N, 2)-0.5
# Save the new positions
db.setMarkers(image=im, x=points[:, 0], y=points[:, 1], track=tracks)
Left: image of clickpoints to count penguins. Right: number of penguins counted.¶
In this example, we show how ClickPoints can be used to count animals in an image.
The example contains some images recorded with a GoPro Hero 2 camera, located at the Baie du Marin king penguin colony on Possession Island of the Crozet Archipelago [Bohec137]. Two marker types, "adult" and "juvenile", were added in ClickPoints to count two types of animals.
The counts can then be evaluated using a small script.
Open the database where the animals were clicked.
[2]:
%matplotlib inline
import matplotlib.pyplot as plt
import clickpoints
# open database
db = clickpoints.DataFile("count.cdb")
path count.cdb
Open database with version 18
Iterate over the images using getImages() to get Image objects. Then query the Marker objects for each image and for the two marker types (see getMarkers()).
[3]:
# iterate over images
for index, image in enumerate(db.getImages()):
# get count of adults in current image
marker = db.getMarkers(image=image, type="adult")
bar1 = plt.bar(index-0.15, marker.count(), color='C0', width=0.3)
# get count of juveniles in current image
marker = db.getMarkers(image=image, type="juvenile")
bar2 = plt.bar(index+0.15, marker.count(), color='C1', width=0.3)
# add labels
plt.ylabel("# of animals")
plt.xticks([0, 1, 2], ["image 1", "image 2", "image 3"])
# add a legend
plt.legend((bar1[0], bar2[0]), ("adult", "juvenile"))
# display the plot
plt.show()
Now we want to plot an example image together with its markers as points.
We can now use getImage() to get the first image of the sequence and load the data from this file. This can now be displayed with matplotlib. Then we use the image and type keyword of the getMarkers() function to filter out only markers from this image and the given type.
[4]:
# get the first image
im_entry = db.getImage(0)
# we load the pixel data from the Image database entry
im_pixel = im_entry.data
# plot the image
plt.imshow(im_pixel, cmap="gray")
# iterate over the adults in the image
for marker in db.getMarkers(image=im_entry, type="adult"):
# plot the coordinates of the marker
plt.plot(marker.x, marker.y, 'C0o', ms=2)
# iterate over the juveniles in the image
for marker in db.getMarkers(image=im_entry, type="juvenile"):
# plot the coordinates of the marker
plt.plot(marker.x, marker.y, 'C1o', ms=2)
# zoom into the image
plt.xlim(100, 2000)
plt.ylim(1200, 400)
plt.show()
Left: image of a plant root in ClickPoints. Right: fluorescence intensities of the cells over time.¶
In this example, we show how the mask painting feature of ClickPoints can be used to evaluate fluorescence intensities in
microscope recordings.
Images of an Arabidopsis thaliana root tip, obtained using a two-photon confocal microscope [Gerlitz2016], recorded at
1 min time intervals are used. The plant roots expressed a photoactivatable green fluorescent protein, which after
activation with a UV pulse diffuses from the activated cells to the neighbouring cells.
For each time step a mask is painted to cover each cell.
The fluorescence intensities can then be evaluated using a Python script.
Open the database where the masks are painted:
[2]:
import re
import numpy as np
from matplotlib import pyplot as plt
import clickpoints
# open the ClickPoints database
db = clickpoints.DataFile("plant_root.cdb")
path plant_root.cdb
Open database with version 18
Get a list of Image objects (getImages()) and a list of all MaskType objects (getMaskTypes()). Then we iterate over the images, load the green channel of the image (image.data[:, :, 1]) and get the mask data for that image (image.mask.data). The mask data is a numpy array with the same dimensions as the image having 0s for the background and the value mask_type.index for each pixel that belongs to the mask_type. Therefore we iterate over all the mask types and filter the pixels of the mask that belong to each type. This mask can then be used to filter the pixels of the green channel that belong to the MaskType.
[3]:
# get images and mask_types
images = db.getImages()
mask_types = db.getMaskTypes()
# regular expression to get time from filename
regex = re.compile(r".*(?P<experiment>\d*)-(?P<time>\d*)min")
# initialize arrays for times and intensities
times = []
intensities = []
# iterate over all images
for image in images:
print("Image", image.filename)
# get time from filename
time = float(regex.match(image.filename).groupdict()["time"])
times.append(time)
# get mask and green channel of image
mask = image.mask.data
green_channel = image.data[:, :, 1]
# iterate over the mask types
intensity = []
for mask_type in mask_types:
# filter from the mask the current mask type
mask_for_this_type = (mask == mask_type.index)
# calculate the mean intensity of this cell in the green channel
mean_intensity_in_cell = np.mean(green_channel[mask_for_this_type])
# and add it to the list
intensity.append(mean_intensity_in_cell)
# add all the mean intensities of the cells in this image to a list
intensities.append(intensity)
We now plot the intensity of each cell over time. The label of each line is the name of the corresponding MaskType.
[4]:
# convert lists to numpy arrays
intensities = np.array(intensities).T
times = np.array(times)
# iterate over cells
for mask_type, cell_int in zip(mask_types, intensities):
plt.plot(times, cell_int, "-s", label=mask_type.name)
# add legend and labels
plt.legend()
plt.xlabel("time (min)")
plt.ylabel("mean intensity")
# display the plot
plt.show()
Now we want to visualize the cells in the image. Therefore we fetch the first image and its mask. Then we iterate over all mask types and draw contours around each masked region. Then we plot the centroid of the mask and the mask type index.
[5]:
from skimage.measure import regionprops, label, find_contours
# get the first image
image = db.getImage(0)
# get the corresponding mask data
mask_data = image.mask.data
# iterate over all mask types
for mask_type in mask_types:
# get the mask data for that mask type
mask = (mask_data == mask_type.index)
# get the contour of the masked region and draw it
contour = find_contours(mask, 0.5)[0]
line, = plt.plot(contour[:, 1], contour[:, 0], '-', lw=1)
# get the centroid and draw a dot with the same color
prop = regionprops(label(mask))[0]
y, x = prop.centroid
plt.plot(x, y, 'o', color=line.get_color())
# and a text showing the index of the mask type with the same color
plt.text(x+3, y+3, '%d' % mask_type.index, color=line.get_color(), fontsize=15)
# draw the image
plt.imshow(image.data)
# and zoom in
plt.xlim(170, 410)
plt.ylim(430, 600)
plt.show()
Supervised Tracking of Fiducial Markers in Magnetic Tweezer Measurements¶
Left: the image of beads on cells loaded in ClickPoints. Right: displacement of beads.¶
In this example, we show how the ClickPoints add-on Track can be used to track objects in an image and how the resulting tracks can later on be used to calculate displacements [Bonakdar2016].
The data we show in this example are measurements of a magnetic tweezer, which uses a magnetic field to apply forces on cells. The cell is additionally tagged with non-magnetic beads, which are used as fiducial markers.
The images can be opened with ClickPoints and every small bead (the fiducial markers) is marked with a marker of a track type. Then the Track add-on is started to find the position of these beads in the subsequent images.
The resulting tracks can now be accessed and evaluated with Python and the ClickPoints package. Therefore we first open the ClickPoints file:
[2]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# connect to ClickPoints database
# database filename is supplied as command line argument when started from ClickPoints
import clickpoints
db = clickpoints.DataFile("track.cdb")
path track.cdb
Open database with version 18
Then we get all the tracks with getTracks(), in this example without any filtering, as we just want all the tracks available. We can iterate over the received object and access the Track objects. From them we can get the points and calculate their displacements with respect to their starting points.
[3]:
# get all tracks
tracks = db.getTracks()
# iterate over all tracks
for track in tracks:
# get the points
points = track.points
# calculate the distance to the first point
distance = np.linalg.norm(points[:, :] - points[0, :], axis=1)
# plot the displacement
plt.plot(track.frames, distance, "-o")
# label the axes
plt.xlabel("# frame")
plt.ylabel("displacement (pixel)")
[3]:
<matplotlib.text.Text at 0x2b3f44019e8>
We can now use getImage() to get the first image of the sequence and load the data from this file. This can now be displayed with matplotlib. To draw the tracks, we iterate over the track list again and plot all the points, as well as the starting point of each track, for a visualisation of the tracks.
[4]:
# get the first image
im_entry = db.getImage(0)
# we load the pixel data from the Image database entry
im_pixel = im_entry.data
# plot the image
plt.imshow(im_pixel, cmap="gray")
# iterate over all tracks
for track in tracks:
# get the points
points = track.points
# plot the beginning of the track
cross, = plt.plot(points[0, 0], points[0, 1], '+', ms=14, mew=1)
# plot the track with the same color
plt.plot(points[:, 0], points[:, 1], lw=3, color=cross.get_color())
# plot the track id with a little offset and the same color
plt.text(points[0, 0]+5, points[0, 1]-5, "#%d" % track.id, color=cross.get_color(), fontsize=15)
# zoom into the image
plt.xlim(600, 800)
plt.ylim(400, 200)
Navid Bonakdar, Richard Gerum, Michael Kuhn, Marina Spörrer, Anna Lippert, Werner Schneider, Katerina E Aifantis, and Ben Fabry. Mechanical plasticity of cells. Nature Materials, 2016.
Using ClickPoints for Visualizing Simulation Results¶
Left: Tracks of the random walk simulation in ClickPoints. Right: Tracks plotted all starting from (0, 0).¶
Here we show how ClickPoints can, apart from viewing and analyzing images, also be used to store simulation results in a ClickPoints project file. This has the advantage that the simulation can later be viewed in ClickPoints, with all the features of playback, zooming and panning. Also the coordinates of the objects used in the simulation can later be accessed through the ClickPoints database file.
This simple example simulates the movement of 10 objects which follow a random walk.
First some imports:
[1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import clickpoints
Then we define the basic parameters for the simulation.
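The parameter cell is not reproduced here; a minimal sketch with assumed values (only the number of objects, 10, is stated in the text; size and frame_count are placeholders):
[2]:
# basic parameters of the simulation
N = 10             # number of random walk objects (from the text)
size = 100         # width and height of the simulated frames in pixel (assumed)
frame_count = 100  # number of frames to simulate (assumed)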
We create a new database. The attribute “w” stands for write, which ensures that we create a new database.
[3]:
# create new database
db = clickpoints.DataFile("sim.cdb", "w")
path sim.cdb
We define a new marker type with the name "point", the color red ("#FF0000") and the mode TYPE_Track. Then we create N track instances for it.
[4]:
# Create a new marker type
type_point = db.setMarkerType("point", "#FF0000", mode=db.TYPE_Track)
# Create track instances
tracks = [db.setTrack(type_point) for i in range(N)]
We create some random initial positions for the tracks. We now iterate over the number of frames we want to simulate. For each iteration we add a new image to the database using setImage(), then we add some random movement to the positions and store the new positions for the tracks using setMarkers(). Here we have to provide the current image and the list of Track objects.
[5]:
# Fix the seed
np.random.seed(0)
# Create initial positions
points = np.random.rand(N, 2)*size
# iterate
for i in range(frame_count):
# Create a new frame
image = db.setImage("frame_%03d" % i, width=size, height=size)
# Move the positions
points += np.random.rand(N, 2)-0.5
# Save the new positions
db.setMarkers(image=image, x=points[:, 0], y=points[:, 1], track=tracks)
After the simulation, we want to access the tracks again to plot them. From every track we query the points and plot them.
[6]:
# plot the results
for track in tracks:
plt.plot(track.points[:, 0], track.points[:, 1], '-')
# adjust the plot ranges
plt.xlim(0, size)
plt.ylim(size, 0)
# and adjust the scaling to equal
plt.axis("equal")
plt.show()
We now plot the tracks in a different fashion, as a “rose” plot, where all the tracks start at the same position.
[7]:
# plot the results
for track in tracks:
# get the points
points = track.points
# subtract the initial point
points = points-points[0, :]
# plot the track from the new origin
plt.plot(points[:, 0], points[:, 1], '-')
# and adjust the scaling to equal
plt.axis("equal")
plt.show()
ClickPoints comes with a powerful API which enables access from within Python to ClickPoints projects, which are stored in a .cdb ClickPoints SQLite database.
To get started reading and writing to a database use:
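The snippet below is a minimal sketch; the filenames are placeholders:
import clickpoints
# open an existing project for reading and appending data
db = clickpoints.DataFile("project.cdb")
# or create a new project ("w" deletes an existing file of the same name)
db_new = clickpoints.DataFile("new_project.cdb", "w")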
The .cdb file consists of multiple SQL tables in which it stores its information. Each table is represented in the API
as a peewee model. Users who are not familiar with peewee can use the API without any knowledge of it, as the API provides
all functions necessary to access the data. For each table a get (retrieve entries), set (add and change entries)
and delete (remove entries) function is provided. Functions with a plural name always work on multiple entries at once
and all arguments can be provided as single values or arrays if multiple entries should be affected.
Merge the track with the given track. All markers from the other track are moved to this track. The other track
which is then empty will be removed. Only works if the tracks don’t have markers in the same images, as this would
cause ambiguities.
Parameters:
track(int, Track) - the track id or track entry whose markers should be merged to this track.
Returns:
count(int) - the amount of new markers for this track.
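A usage sketch (the method name merge and the track ids used here are assumptions for illustration):
with clickpoints.DataFile("track.cdb") as db:
    track_a = db.getTrack(1)     # the track that should receive the markers
    count = track_a.merge(2)     # move all markers of track 2 into track 1 (assumed method name)
    print(count, "markers merged")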
processed(bool) - a flag that is set to 0 if the marker is manually moved in ClickPoints, it can be set from an add-on if the add-on has already processed this marker.
style(str) - the style definition of the marker.
text(str) - an additional text associated with the marker. It is displayed next to the marker in ClickPoints.
track(Track) - the track entry the marker belongs to. Only for TYPE_Track.
correctedXY()(array) - the marker position corrected by the offset of the image.
pos()(array) - an array containing the coordinates of the marker: [x, y].
new_type(str, int, MarkerType) - the id, name or entry for the marker type which should be the new type of this marker. It has to be of mode TYPE_Normal.
processed(bool) - a flag that is set to 0 if the line is manually moved in ClickPoints, it can be set from an add-on if the add-on has already processed this line.
style(str) - the style definition of the line.
text(str) - an additional text associated with the line. It is displayed next to the line in ClickPoints.
correctedXY()(array) - the line positions corrected by the offset of the image.
pos()(array) - an array containing the coordinates of the line: [x, y].
length()(float) - the length of the line in pixel.
angle()(float) - the angle of the line to the horizontal in radians.
Crop a line region from the given image, defined by the line. If a width is given, a two dimensional region is cropped
from the image; if not, a one dimensional array is returned.
Parameters:
image(ndarray, Image) - the image as a database entry or a numpy array.
width(int, optional) - the width of the 2D line to crop from the image.
processed(bool) - a flag that is set to 0 if the rectangle is manually moved in ClickPoints, it can be set from an add-on if the add-on has already processed this line.
style(str) - the style definition of the rectangle.
text(str) - an additional text associated with the rectangle. It is displayed next to the rectangle in ClickPoints.
correctedXY()(array) - the rectangle positions corrected by the offset of the image.
pos()(array) - an array containing the coordinates of the rectangle: [x, y].
slice_x(border=0)(slice) - a slice object to use the rectangle to cut out a region of an image
slice_y(border=0)(slice) - a slice object to use the rectangle to cut out a region of an image
slice(border=0)(tuple) - a tuple of a y-slice and an x-slice, border specifies an additional border to slice around the rectangle
new_type(str, int, MarkerType) - the id, name or entry for the marker type which should be the new type of this rectangle. It has to be of mode TYPE_Rect.
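A sketch of cropping an image region with these slice helpers (the type name "cell" and the filename are placeholders):
import clickpoints
with clickpoints.DataFile("project.cdb") as db:
    # take the first rectangle of the type "cell"
    rect = db.getRectangles(type="cell")[0]
    # crop the rectangle region from its image, including a 10 pixel border
    crop = rect.image.data[rect.slice(border=10)]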
processed(bool) - a flag that is set to 0 if the ellipse is manually moved in ClickPoints, it can be set from an add-on if the add-on has already processed this ellipse.
style(str) - the style definition of the ellipse.
text(str) - an additional text associated with the ellipse. It is displayed next to the ellipse in ClickPoints.
center(array) - an array containing the coordinates of the center of the ellipse: [x, y].
new_type(str, int, MarkerType) - the id, name or entry for the marker type which should be the new type of this ellipse. It has to be of mode TYPE_Ellipse.
processed(bool) - a flag that is set to 0 if the polygon is manually moved in ClickPoints, it can be set from an add-on if the add-on has already processed this polygon.
style(str) - the style definition of the polygon.
text(str) - an additional text associated with the polygon. It is displayed next to the polygon in ClickPoints.
center(array) - an array containing the coordinates of the center of the polygon: [x, y].
new_type(str, int, MarkerType) - the id, name or entry for the marker type which should be the new type of this polygon. It has to be of mode TYPE_Polygon.
The DataFile is the interface to the ClickPoints database. This can either be used in external evaluation scripts that
take data clicked in ClickPoints for further evaluation or in add-on scripts where it is accessible through the self.db
class variable.
The DataFile class provides access to the .cdb file format in which ClickPoints stores the data for a project.
Parameters
database_filename (string) – the filename to open
mode (string, optional) – can be ‘r’ (default) to open an existing database and append data to it or ‘w’ to create a new database. If the mode is ‘w’ and the
database already exists, it will be deleted and a new database will be created.
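A minimal sketch of both modes (the file names are arbitrary examples):
import clickpoints

# open an existing project; 'r' is the default mode and appends to the database
db = clickpoints.DataFile("project.cdb")

# create a new, empty project; an existing file of the same name would be deleted
db_new = clickpoints.DataFile("new_project.cdb", mode="w")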
image (int, Image, array_like, optional) – the image/images for which the annotations should be retrieved. If omitted, frame numbers or filenames should be specified instead.
frame (int, array_like, optional) – frame number/numbers of the images whose annotations should be returned. If omitted, images or filenames should be specified instead.
filename (string, array_like, optional) – filename of the image/images whose annotations should be returned. If omitted, images or frame numbers should be specified instead.
timestamp (datetime, array_like, optional) – timestamp/s of the annotations.
comment (string, array_like, optional) – the comment/s of the annotations.
rating (int, array_like, optional) – the rating/s of the annotations.
id (int, array_like, optional) – id/ids of the annotations.
image (int, Image, array_like, optional) – the image/images for which the mask should be deleted. If omitted, frame numbers or filenames should be specified instead.
frame (int, array_like, optional) – frame number/numbers of the images whose masks should be deleted. If omitted, images or filenames should be specified instead.
filename (string, array_like, optional) – filename of the image/images whose masks should be deleted. If omitted, images or frame numbers should be specified instead.
id (int, array_like, optional) – id/ids of the masks.
layer (int, array_like, optional) – layer/layers of the images whose masks should be deleted. Always use with frame!
image (int, Image, optional) – the image for which the annotation should be retrieved. If omitted, frame number or filename should be specified instead.
frame (int, optional) – frame number of the image whose annotation should be returned. If omitted, image or filename should be specified instead.
filename (string, optional) – filename of the image whose annotation should be returned. If omitted, image or frame number should be specified instead.
id (int, optional) – id of the annotation entry.
create (bool, optional) – whether the annotation should be created if it does not exist. (default: False)
image (int, Image, array_like, optional) – the image/images for which the annotations should be retrieved. If omitted, frame numbers or filenames should be specified instead.
frame (int, array_like, optional) – frame number/numbers of the images whose annotations should be returned. If omitted, images or filenames should be specified instead.
filename (string, array_like, optional) – filename of the image/images whose annotations should be returned. If omitted, images or frame numbers should be specified instead.
timestamp (datetime, array_like, optional) – timestamp/s of the annotations.
tag (string, array_like, optional) – the tag/s of the annotations to load.
comment (string, array_like, optional) – the comment/s of the annotations.
rating (int, array_like, optional) – the rating/s of the annotations.
id (int, array_like, optional) – id/ids of the annotations.
Returns
entries – a query object containing all the matching Annotation entries in the database file.
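The returned query object can be iterated directly. A minimal sketch, assuming the query method is named getAnnotations and that annotations were tagged "censored" in the project (both names are examples):
import clickpoints

db = clickpoints.DataFile("data.cdb")

# print the filename and comment of every annotation carrying the example tag
for annotation in db.getAnnotations(tag="censored"):
    print(annotation.image.filename, annotation.comment)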
start_frame (int, optional) – start at the image with the number start_frame. Default is 0
end_frame (int, optional) – the last frame of the iteration (excluded). Default is None, the iteration stops when no more images are present.
skip (int, optional) – how many frames to jump. Default is 1
layer (int, string, optional) – the layer of frames over which to iterate.
Returns
image_iterator – an iterator object to iterate over Image entries.
Return type
iterator
Examples
import clickpoints

# open the database "data.cdb"
db = clickpoints.DataFile("data.cdb")

# iterate over all images and print the filename
for image in db.GetImageIterator():
    print(image.filename)
image (int, Image, array_like, optional) – the image/images for which the mask should be retrieved. If omitted, frame numbers or filenames should be specified instead.
frame (int, array_like, optional) – frame number/numbers of the images whose masks should be returned. If omitted, images or filenames should be specified instead.
filename (string, array_like, optional) – filename of the image/images whose masks should be returned. If omitted, images or frame numbers should be specified instead.
id (int, array_like, optional) – id/ids of the masks.
layer (int, optional) – layer of the images whose masks should be returned. Always use with frame.
order_by (string, optional) – sorts the result according to the sort parameter (‘sort_index’ or ‘timestamp’)
Returns
entries – a query object containing all the matching Mask entries in the database file.
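A minimal sketch of iterating over the returned query, assuming the query method is named getMasks and that mask entries expose their image and pixel data as image and data:
import clickpoints
import numpy as np

db = clickpoints.DataFile("data.cdb")

# count the labelled (non-zero) pixels of every mask, ordered by frame
for mask in db.getMasks(order_by="sort_index"):
    print(mask.image.filename, np.count_nonzero(mask.data))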
Return an array of all track points with the given filters. The array has the shape of [n_tracks, n_images, pos],
where pos is the 2D position of the markers.
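A minimal sketch, assuming the method is named getTracksNanPadded and that a track-type marker type named "cell" exists (the type name is an example); frames in which a track has no marker are padded with NaN, so NaN-aware numpy functions are used:
import clickpoints
import numpy as np

db = clickpoints.DataFile("data.cdb")

# positions of all tracks of the example type "cell",
# shape [n_tracks, n_images, 2]
positions = db.getTracksNanPadded(type="cell")

# displacement between consecutive frames for every track
steps = np.linalg.norm(np.diff(positions, axis=1), axis=2)

# mean step length per frame, ignoring missing (NaN) entries
print(np.nanmean(steps, axis=0))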
image (int, Image, optional) – the image for which the mask should be set. If omitted, frame number or filename should be specified instead.
frame (int, optional) – frame number of the image whose mask should be set. If omitted, image or filename should be specified instead.
filename (string, optional) – filename of the image whose mask should be set. If omitted, image or frame number should be specified instead.
data (ndarray, optional) – the mask data of the mask to set. Must have the same dimensions as the corresponding image, but only
one channel, and must be of data type uint8.
id (int, optional) – id of the mask entry.
layer (int, optional) – the layer of the image whose mask should be set. Always use with frame.
checkShape (bool, optional) – check whether the mask and the image have the same shape
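A minimal sketch, assuming the method is named setMask and that a mask type with index value 1 has been defined in the project (both assumptions are examples, not part of this documentation):
import clickpoints
import numpy as np

db = clickpoints.DataFile("data.cdb")

# take the first image of the project
image = db.getImages()[0]

# build an empty single-channel mask with the image dimensions, data type uint8
mask_data = np.zeros(image.data.shape[:2], dtype=np.uint8)

# label a rectangular region with the (assumed) mask type index 1
mask_data[100:200, 150:300] = 1

# store the mask for this image
db.setMask(image=image, data=mask_data)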
ClickPoints allows you to easily write add-on scripts.
Note
The Add-ons section demonstrates how the add-ons can be used and may serve as a good starting point
to write custom add-ons.
The add-on consists of at least two files: a metadata file with a .txt ending which contains basic information on the add-on, and a script file
providing a class derived from clickpoints.Addon as shown below.
The add-on files can be located in the ClickPoints add-on folder (/path-to-clickpoints/clickpoints/addons/) or in an
external folder and be imported manually on each use.
Furthermore, ClickPoints offers a way for Python packages to define ClickPoints add-ons. To do so, place a file called
__clickpoints_addon__.txt in the main folder of the package (usually the child folder of the folder where the setup.py
is located). The __clickpoints_addon__.txt file contains the path to the metadata file (ending in .txt) of the
add-on. The paths are defined relative to the folder that contains the __clickpoints_addon__.txt file. A package can
define multiple ClickPoints add-ons; each line in __clickpoints_addon__.txt defines the relative path to one
add-on, as sketched below.
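For example, a hypothetical package my_package that ships two add-ons in an addons/ subfolder could place a __clickpoints_addon__.txt with the following two lines in its package folder (all names here are made up for illustration):
addons/MyAddon/MyAddon.txt
addons/MyOtherAddon/MyOtherAddon.txt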
Defines the image of the add-on. The image will be displayed in ClickPoints in the add-on list directly above the
description. The image should have a dimension of 300x160 pixels.
Defines a short description for the add-on. If a longer description is desired, a file called Desc.html next to the
*.txt file can be used. This file supports rich text with an html subset defined by Qt Html Subset.
requirements - requirements=xlwt,skimage
Define the packages that this add-on needs. Multiple packages have to be separated by a comma.
The script file has to contain a class called Addon which is derived
from a prototype Add-on class:
import clickpoints

class Addon(clickpoints.Addon):
    def __init__(self, *args, **kwargs):
        clickpoints.Addon.__init__(self, *args, **kwargs)

        print("I am initialized with the database", self.db)
        print("and the ClickPoints interface", self.cp)

    def run(self, *args, **kwargs):
        print("The user wants to run me")
This class will allow you to overload the init function where your add-on can set up its configuration, e.g. add some
new marker types to ClickPoints (see the sketch below).
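A minimal sketch of such an __init__, assuming the database methods getMarkerType/setMarkerType and the self.cp.reloadTypes() call; the marker type name and color are arbitrary examples:
import clickpoints

class Addon(clickpoints.Addon):
    def __init__(self, *args, **kwargs):
        clickpoints.Addon.__init__(self, *args, **kwargs)

        # create a marker type for this add-on if it does not exist yet
        if not self.db.getMarkerType("detection"):
            self.db.setMarkerType("detection", "#FF0000", self.db.TYPE_Normal)
            # let the ClickPoints interface pick up the new type
            self.cp.reloadTypes()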
To process data, you can overload the run function. Here the add-on can do its heavy work. Some caution has to be
taken when executing interface actions, as run is called in a second thread to not block ClickPoints during its
execution. For a good example of an add-on that uses the run function, refer to the Tracking add-on.
But add-ons can also provide passive features that are not executed by a call of the run method, but rely on callbacks.
A good example here is the ‘Measure Tool’ add-on, which just reacts to the MarkerMoved callback.
The add-on class has two main member variables: self.db and self.cp.
self.db is a DataFile instance which gives access to the ClickPoints database. For details on the interface see Database API.
self.cp is a Commands instance which allows for communication with the ClickPoints interface.
Add-ons can define their own options that are saved in the database alongside the ClickPoints options. They are also included
in the ClickPoints options menu and in the export of options.
New options should be defined in the __init__ function of the add-on. To this end, the add-on class provides methods to
add, get and set options.
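A minimal sketch of how these could be used, assuming the methods are named addOption, getOption and setOption with key, display_name, default and value_type parameters; the option key and values are arbitrary examples:
import clickpoints

class Addon(clickpoints.Addon):
    def __init__(self, *args, **kwargs):
        clickpoints.Addon.__init__(self, *args, **kwargs)

        # register an option that appears in the ClickPoints options menu
        self.addOption(key="min_size", display_name="Minimum object size",
                       default=10, value_type="int")

    def run(self, *args, **kwargs):
        # read the current value and store a changed one back
        min_size = self.getOption("min_size")
        self.setOption("min_size", min_size + 1)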
The button for this add-on was pressed. If not overloaded it will just call self.run_threaded() to execute the
add-on's self.run method in a new thread.
A typical overload for GUI-based add-ons would be to call the self.show method.
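A minimal sketch of such an overload, assuming the overridden method is named buttonPressedEvent as implied by the default behaviour described above:
import clickpoints

class Addon(clickpoints.Addon):
    def buttonPressedEvent(self):
        # open the add-on's window instead of starting a processing run
        self.show()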
Add-ons have some basic functions to communicate with the main ClickPoints window. This interface is accessible through
the self.cp class variable in the add-on class.
status (int) – the button can have three states: STATUS_Idle for a non-active button, STATUS_Active for an active button and
STATUS_Running for an active button with an hourglass symbol.
ClickPoints is developed primarily by academics, and so citations matter a lot to
us. Citing ClickPoints also increases its exposure and potential user
(and developer) base, which is to the benefit of all users of ClickPoints. Thanks
in advance!
Here is a list of publications that used ClickPoints for their evaluation.
Bonakdar, N., Gerum, R. C., Kuhn, M., Spörrer, M., Lippert, A., Schneider, W., … Fabry, B. (2016). Mechanical plasticity of cells. Nature Materials, 15(10), 1090–1094. https://doi.org/10.1038/nmat4689
Braniš, J., Pataki, C., Spörrer, M., Gerum, R. C., Mainka, A., Cermak, V., … Rosel, D. (2017). The role of focal adhesion anchoring domains of CAS in mechanotransduction. Scientific Reports, 7. https://doi.org/10.1038/srep46233
Gerum, R. C., Richter, S., Winterl, A., Fabry, B., & Zitterbart, D. P. (2017). CameraTransform: a Scientific Python Package for Perspective Camera Corrections. ArXiv http://arxiv.org/abs/1712.07438
Richter, S., Gerum, R., Schneider, W., Fabry, B., Le Bohec, C., & Zitterbart, D. P. (2018). A remote-controlled observatory for behavioural and ecological research: A case study on emperor penguins. Methods in Ecology and Evolution. https://doi.org/10.1111/2041-210X.12971
Gerum, R., Richter, S., Fabry, B., Le Bohec, C., Bonadonna, F., Nesterova, A., Zitterbart, D. (2018). Structural organisation and dynamics in king penguin colonies, Journal of Physics D: Applied Physics, 51(16), 164004. https://doi.org/10.1088/1361-6463/AAB46B
Richter, S., Gerum, R., Winterl, A., Houstin, A., Seifert, M., Peschel, J., Fabry, B., Le Bohec, C., Zitterbart, D.P. (2018). Phase transitions in huddling emperor penguin colonies, Journal of Physics D, 51(21), 214002. https://doi.org/10.1088/1361-6463/aabb8e
Gerlitz, N., Gerum, R., Sauer, N., Stadler, R. (2018). Photoinducible DRONPA-s: a new tool to investigate cell-cell connectivity, The Plant Journal, https://doi.org/10.1111/tpj.13918
Pârvulescu, L. (2019). Introducing a new Austropotamobius crayfish species (Crustacea, Decapoda, Astacidae): A Miocene endemism of the Apuseni Mountains, Romania, Zoologischer Anzeiger, 279, 94–102. https://doi.org/10.1016/j.jcz.2019.01.006
Cóndor, M., Mark, C., Gerum, R. C., Grummel, N. C., Bauer, A., García-Aznar, J. M., & Fabry, B. (2019). Breast cancer cells adapt contractile forces to overcome steric hindrance, Biophysical Journal. https://doi.org/10.1016/j.bpj.2019.02.029