Posts

Day 31

Presentation day! I'll admit that I was pretty nervous before going up to present. There wasn't much reason to feel so pressured, but the other interns did so well, so it felt like there was a standard to be met. As per tradition, my presentation was under time, even though it was over time when I was rehearsing the night before. Not a big deal. My biggest concern going into today was that nobody would understand what I was talking about. After presenting, I still got the feeling that nobody understood what I was talking about. I guess I'll never know for sure. Dmitry did ask a good question though (to which I provided a mediocre response), so I suppose that proves something. I think all of the interns did really well today, and this was a great way to end the internship. Postmortem time! Coming into this internship, I had nagging doubts about the program, most of which, to be honest, revolved around it being unpaid. These doubts were first addressed on the day of my

Day 30

Last day before presentations. The SMCAE tests weren't going anywhere, so I fell back to the old data. I added it to my presentation, applied some finishing touches, and gave it to Joe. After lunch, Titus rehearsed his presentation in the auditorium, and Aditi, Anjana, and I gave him a lot of feedback. He managed to cut the time down significantly, but the organization of his presentation made it a little bit confusing, even to Aditi. I didn't feel like I had to rehearse mine, since I've been pretty good on time. At the lab, there wasn't much to do, since Ron was out for most of the day and all of the presentation work was done. Nate did receive instructions for the GRSS challenge from Ron, so I helped him with that for the rest of the day.

Day 29

This morning, we ran through all of the presentations in the auditorium. Joe was really strict about staying within the time limit because he wanted to finish the presentations by noon, both today and on Thursday. Today we finished almost exactly at noon, but in all likelihood, that's not happening on Thursday. I found myself stuttering and going into less depth than I did in the lab rehearsals. With time to spare, I think I could do well by speaking more slowly and focusing on the important content. The only feedback I got was a question from Joe: "What's the difference between your work and Nate's?" Nate and I gave a lackluster answer, but I genuinely think that compacting our presentations into 15 minutes would be a bit impractical. Possible, maybe, but I wouldn't want to.

Day 28

Week 6 of the internship; the final presentation is on Thursday. It doesn't feel like week 6, but that's probably because I missed almost all of week 4. Regardless, there was much work to do before Thursday, so Nate and I rehearsed our presentations twice in front of the lab. Many revisions were made, and I'm feeling much more prepared for the real deal. Thank you to everybody who was willing to listen to four lame high school presentations about the same topic on the same day. Your contribution will make the very same presentations significantly less lame on Thursday. Apart from presentation-related stuff, Ron and I ran some ladder network and SMCAE experiments all day. Ron tackled the ladder network with PELU activation (which was apparently very difficult), and I continued adjusting the initial layer sizes on the ladder network and doing SMCAE runs with fixed-component-count PCA as a feature scaler. The results aren't looking too hot, but I'll withhold judgement.
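
Ron's PELU work lives in the lab's ladder network code, which I won't reproduce here, but for reference, this is roughly what the activation itself looks like as a standalone module. It's a minimal sketch assuming PyTorch; the framework choice and the clamping used to keep a and b positive are my assumptions, not necessarily how Ron implemented it:

```python
import torch
import torch.nn as nn

class PELU(nn.Module):
    """Parametric ELU: f(x) = (a/b) * x for x >= 0,
    a * (exp(x / b) - 1) for x < 0, with learnable a, b > 0."""

    def __init__(self, a: float = 1.0, b: float = 1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(a))
        self.b = nn.Parameter(torch.tensor(b))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp so the learned parameters stay strictly positive.
        a = self.a.clamp(min=1e-4)
        b = self.b.clamp(min=1e-4)
        return torch.where(x >= 0, (a / b) * x, a * (torch.exp(x / b) - 1))
```

My guess is that the hard part was wiring this into the ladder network's encoder and decoder paths, not the activation itself.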

Day 27

Nate and I ran our presentations today in front of the lab and got some feedback. I was worried about going over 10 minutes, so I rushed the presentation and finished well under 10 minutes. It was pretty sloppy too, but now I have a better idea of what needs to change. It should go much smoother on Monday. The ladder network tests were running overnight and didn't finish by the time I came in today. In fact, as of the time I left, they still hadn't finished. I think they should be done Saturday morning, at which point I'll start some new tests.

Day 26

Today we made some promising progress on both experiments. Ron noticed that the ladder network performed exceptionally well on features generated by 4-5 layer SMCAEs and suggested that the ladder network may work best when the first layer reduces dimensionality instead of increasing it. This theory runs completely contrary to the results of the original paper and the "gold standard" MNIST ladder network, so I was a bit skeptical, but there was no reason not to try the same approach with 1-3 layer SMCAEs. So far, these kinds of models work marginally better than an SVM. I will withhold judgement until we do more trials. On the SCAE/SMCAE side of the experiments, Ron suggested that since the stacks with more layers produce more features, the SVM used for classification may be prone to overfitting (which would explain our poor results from last time). Hence, we should perform PCA with a fixed number of output components before classifying, so that the number of features stays constant. I
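
For reference, here's roughly what that fixed-component PCA + SVM step looks like in scikit-learn. This is just a sketch: the component count, the RBF kernel, and the standardization step are placeholders I'm assuming, not the exact settings from our runs:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Fix the PCA output size so every SMCAE depth hands the SVM the same
# number of features, regardless of how many the stack itself produces.
N_COMPONENTS = 50  # placeholder, not the value we actually used

clf = make_pipeline(
    StandardScaler(),                # scaling choice is an assumption
    PCA(n_components=N_COMPONENTS),  # fixed number of output components
    SVC(kernel="rbf"),               # kernel choice is an assumption
)

# X_train: SMCAE-generated features, y_train: class labels
# clf.fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```

The point is just that the classifier always sees N_COMPONENTS features, so the deeper stacks can't overfit the SVM simply by producing more of them.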

Day 25

Today we switched directions a bit and tried the ladder network on SMCAE-generated features. It's performing a little better than the SVM, which is looking promising for my presentation and Ron's paper. We also looked into the SCAE data from a while ago, and it honestly looks pretty bad. The multiloss CAEs aren't performing very well, so we might have to reframe that experiment too. Luckily the CAE vs. MCAE data is still pretty good, so it's not a complete loss. I also sat in on a DIRS meeting, where Prof. Rottman (from Ben-Gurion University of the Negev) gave a funny and informative presentation on target detection in hyperspectral images. It's an interesting problem, and he explained the mathematics behind it quite well.