Create the final images using the function "map_image()" and plot them, so that you can compare them (and your script) with test_image_mapper.ipynb. We read the pixel positions of the IACT cameras from the FITS files in "ctlearn/ctlearn/pixel_pos_files/", which originate from ctapipe-extra.
Hello @Tjark Miener, I noticed that in the 'image shifting' section of 'image_mapping.py', we shift alternate columns by 1 without checking whether the input is already in the required form shown here: https://github.com/ai4iacts/hexagdly/blob/master/notebooks/how_to_apply_adressing_scheme.ipynb Should our test script contain images that are not aligned in this particular way, or is the input to our CNN always in the correct form? These FITS files also contain rotation information.
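For reference, the column-shifting step being discussed can be sketched like this. This is a toy illustration of the hexagdly addressing idea, not the actual code in image_mapping.py; the helper name and the zero-padding choice are assumptions.

```python
import numpy as np

def shift_alternate_columns(img):
    """Toy sketch: shift every odd-indexed column down by one row,
    zero-padding the vacated cell, as the hexagdly addressing scheme
    stores vertically offset hexagonal columns in a square array."""
    out = img.copy()
    out[1:, 1::2] = img[:-1, 1::2]  # move odd-column values down one row
    out[0, 1::2] = 0                # zero-pad the top of shifted columns
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
shifted = shift_alternate_columns(img)
```

Even-indexed columns are left untouched, so only every other column of the camera image moves, which is what aligns the hexagonal lattice with a square-addressed array.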
So just introduce yourself and sum up the project and your progress so far.
Feel free to copy paste some parts of your application.
Raw image before image-shifting
Image after image-shifting
After pre-processing, I expanded the dimensions of the image in compliance with the format required for convolution (using Conv2d() from hexagdly).
Sample output after convolution for different stride values
Please have a look at my notebook here. Is this what you expected, or have I misunderstood it?

@h3li05369 The event that you are showing in your first two plots looks good to me! Awesome that you went one step further and got familiar with the usage of hexagdly. It's also good that you are familiar with the functionality of the tool and try to solve this problem using CTLearn.
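To make the stride comparison concrete, here is a minimal sketch of how stride affects the output size of a convolution. This uses a plain square convolution in numpy purely for illustration, not hexagdly's hexagonal kernels; the function name and shapes are hypothetical.

```python
import numpy as np

def conv2d(img, kernel, stride):
    """Naive valid-mode 2D convolution; output side length is
    (in_size - kernel_size) // stride + 1."""
    n, k = img.shape[0], kernel.shape[0]
    m = (n - k) // stride + 1
    out = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            patch = img[i * stride:i * stride + k,
                        j * stride:j * stride + k]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.ones((8, 8))
kernel = np.ones((3, 3))
for s in (1, 2, 3, 4):
    print(f"stride {s}: output shape {conv2d(img, kernel, s).shape}")
```

For an 8x8 input and a 3x3 kernel this gives 6x6, 3x3, 2x2, and 2x2 outputs for strides 1 through 4, which is why the four sample plots shrink as the stride grows.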
Unfortunately, our data format, which is created through DL1 Data Handler, differs from the MAGIC file. I haven't studied this package in detail, so could you please explain your four plots in more detail?

@Tjark Miener After pre-processing the image with the image_shifting addressing scheme (which was recently integrated into CTLearn from hexagdly by @aribrill), it is fed into the Conv2d layer (also imported from hexagdly) for 4 different stride sizes. The four plots are the outputs of plain hexagonal convolution for those 4 stride sizes. The following is the code for the plot:
The result after the first epoch
I hope that I've explained it clearly.

While reading the pixel positions into CTLearn, we already make sure to perform the right rotation, and therefore the pixel positions have the required form above. So you don't need to add this check in your script!

Thank you @h3li05369. For the time being, we aren't allowed to share CTA private data with you or any non-CTA member. A workaround here would be that you fork the CTLearn project, make your changes, and then I could set up some runs for you on our GPUs. There are different packages, like IndexedConv and HexagDLy, which have been shown to improve performance.
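As a toy illustration of the rotation step mentioned above (a hypothetical helper, not CTLearn's actual pixel-position code), rotating the camera's pixel coordinates by a fixed angle is a plain 2D rotation:

```python
import numpy as np

def rotate_positions(pix_x, pix_y, angle_deg):
    """Rotate pixel coordinates by angle_deg (counter-clockwise) so that
    the hexagonal pixel columns line up the way the addressing scheme
    expects. Returns a 2 x N array of rotated (x, y) positions."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ np.vstack([pix_x, pix_y])

rotated = rotate_positions(np.array([1.0]), np.array([0.0]), 90.0)
```

Because this rotation is applied once when the pixel positions are read in, downstream scripts can assume the aligned form and skip the check.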
Many thanks for your interest in CTLearn and this issue in particular. Our recommendation would be to get to know our code by installing CTLearn on your system and reading through it, and, if you are already familiar with convolutional neural networks, checking out the couple of packages @Tjark Miener recommends above. There are mainly two ways of dealing with raw IACT images captured by cameras made of hexagonal lattices of photomultipliers: you can either transform the hexagonal camera pixels to square image pixels #56, or you can modify your convolution and pooling methods.
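The first approach (resampling hexagonal pixels onto a square grid) can be sketched with simple nearest-neighbor interpolation. This is a minimal illustration, not CTLearn's actual ImageMapper; the function name, grid size, and interpolation choice are assumptions.

```python
import numpy as np

def hex_to_square(pix_x, pix_y, charges, grid_size=8):
    """Assign each cell of a square grid the charge of the nearest
    hexagonal camera pixel (nearest-neighbor interpolation)."""
    xs = np.linspace(pix_x.min(), pix_x.max(), grid_size)
    ys = np.linspace(pix_y.min(), pix_y.max(), grid_size)
    gx, gy = np.meshgrid(xs, ys)
    # distance from every grid cell to every hexagonal pixel
    d = np.hypot(gx[..., None] - pix_x, gy[..., None] - pix_y)
    nearest = d.argmin(axis=-1)          # index of closest hex pixel
    return charges[nearest]              # (grid_size, grid_size) image

# Three hex pixels forming a small triangle, with distinct charges
pix_x = np.array([0.0, 1.0, 0.5])
pix_y = np.array([0.0, 0.0, 0.87])
charges = np.array([1.0, 2.0, 3.0])
square = hex_to_square(pix_x, pix_y, charges)
```

More sophisticated schemes (oversampling, rebinning, bilinear or bicubic interpolation) trade off distortion against resolution, while the second approach sidesteps resampling entirely by convolving directly on the hexagonal lattice.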