JB
On 11/15/2010 12:09 PM, Garcia-Acosta, Marcos wrote:
Fredi,
Moments are averages, pure math ; ) one more reason the younger people should pay attention to math ; )
OpenCV is an open-source library of computer vision functions, maintained by Willow Garage, the robot makers.
Cheers and keep up the work.
From: JB Rajsky [mailto:jbr85@cornell.edu]
Sent: Monday, November 15, 2010 8:55 AM
To: Faridodin Lajvardi
Cc: John Rangel; John Harris; ken whitley; John Ayala; Garcia-Acosta, Marcos; chaudhari@phxhs.k12.az.us
Subject: Re: Problem Aligning Ourselves With The Orange Marker
Hello Fredi,
I have some answers:
1) A bounding box is a rectangular box around a set of points, usually one that attempts to minimize its area while enclosing all of the points.
2) http://en.wikipedia.org/wiki/Image_moment
You can do all sorts of cool things with image moments (e.g. calculate angles, mass, and location; use them as a feature set for object recognition; etc.). Any decent vision library (like OpenCV) should have built-in image moment functions; the first sketch below these answers shows the basic bounding-box and moment calls.
3) You can see most of the mission elements in TRANSDEC, but it's really hard to dead reckon the course without some sort of position measurement from a DVL. There's also a significant amount of metal in TRANSDEC, which can adversely affect your compass readings, so I'd recommend incorporating an inertial measurement unit (IMU) into your vehicle. These sensors measure acceleration and rotation rate, so you can somewhat reliably filter out magnetic distortion in the compass (the second sketch below these answers shows one common way to blend the two).
4) A controller is something that tries to drive your sensor measurements toward desired values (such as heading, pitch, depth, speed, etc.), so it would be very different from vision. You would essentially have your mission and vision code running separately from the controller and setting desired values that the controller would then try to reach ( http://en.wikipedia.org/wiki/Controller_(control_theory)); the last sketch below these answers shows a bare-bones version. To communicate among various running programs, we've developed a custom Shared Memory system based on standardized POSIX libraries that lets us create variables that all of the programs can read and write. This could also be done, in theory, with some sort of SQL database (we've considered trying it, but our current system works really well) or a proprietary system (I believe there's something called Microsoft Robotics Studio that does something like this).
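To make answers 1 and 2 concrete, here's a rough Python/OpenCV sketch. It assumes you already have a binary image where the marker shows up as white pixels; the function name and the "biggest blob is the marker" shortcut are just placeholders for however your pipeline actually works:

import cv2
import math

def marker_stats(binary_image):
    # Depending on your OpenCV version, findContours may also return the image
    # as a first value; adjust the unpacking if so.
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)  # assume the biggest blob is the marker

    # Axis-aligned bounding box: the smallest upright rectangle enclosing the points.
    x, y, w, h = cv2.boundingRect(blob)

    # Rotated bounding box: the smallest-area rectangle at any angle.
    (bx, by), (bw, bh), box_angle = cv2.minAreaRect(blob)

    # Image moments are just weighted averages of the pixel coordinates.
    m = cv2.moments(blob)
    area = m['m00']              # zeroth moment ~ "mass" (area)
    cx = m['m10'] / m['m00']     # first moments give the centroid
    cy = m['m01'] / m['m00']

    # Orientation of the blob's long axis from the second-order central moments.
    angle = 0.5 * math.atan2(2.0 * m['mu11'], m['mu20'] - m['mu02'])

    return (x, y, w, h), (cx, cy), area, math.degrees(angle)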
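For answer 3, one common way to blend a gyro with a compass is a simple complementary filter. This is only a sketch of the idea; the names are made up, and you'd feed it your own sensor readings and tune alpha:

import math

def wrap_degrees(angle):
    # Keep an angle in the range [-180, 180).
    return (angle + 180.0) % 360.0 - 180.0

def filtered_heading(prev_heading, gyro_rate_dps, compass_deg, dt, alpha=0.98):
    # Trust the gyro over short time scales and the compass over long ones.
    # prev_heading  - last filtered heading estimate, degrees
    # gyro_rate_dps - yaw rate from the IMU, degrees per second
    # compass_deg   - current (possibly distorted) compass heading, degrees
    # dt            - time since the last update, seconds
    # alpha         - how heavily to weight the gyro (0..1)
    predicted = prev_heading + gyro_rate_dps * dt   # integrate the gyro
    error = wrap_degrees(compass_deg - predicted)   # how far off the compass says we are
    # Pull the prediction gently toward the compass so gyro drift doesn't build up,
    # while brief magnetic distortions get mostly ignored.
    return wrap_degrees(predicted + (1.0 - alpha) * error)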
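And for answer 4, here's a bare-bones sketch of what a heading controller could look like, using a simple PID loop. The gains and the commented-out compass/thruster calls are hypothetical; the point is that the vision code only ever touches desired_heading:

class HeadingController:
    # The vision/mission code writes desired_heading; this loop tries to reach it.
    def __init__(self, kp=2.0, ki=0.0, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.desired_heading = 0.0

    def update(self, measured_heading, dt):
        # Smallest signed difference between desired and measured heading.
        error = (self.desired_heading - measured_heading + 180.0) % 360.0 - 180.0
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output means turn one way, negative the other.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical main loop -- read_compass() and set_turn_thrusters() stand in for
# whatever sensor and motor interfaces your vehicle actually has:
# controller = HeadingController()
# while True:
#     turn_command = controller.update(read_compass(), dt=0.05)
#     set_turn_thrusters(turn_command)
#     time.sleep(0.05)

The shared memory piece is basically just the desired_* variables living somewhere that both the vision program and the controller can read and write.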
JB
On 11/14/2010 03:43 PM, Faridodin Lajvardi wrote:
JB
Here is a video of the progress so far with the vision code:
http://www.youtube.com/watch?v=Wy2BB5atcF0&feature=player_embedded
also our webpage with updates on our AUV
http://falconroboticsteam842.org/AUVSI.aspx
I have several questions.
1. What is a bounding box? Is that the box we are creating now?
2. What are image moments? I have been hounding the programmers that they need to have a - and + slope so they can properly adjust the AUV's heading; they claim they don't need it. But I see by your answer that I was right.
3. During the mission, how accurately do you know where the mission props are placed? Can you use a compass heading and depth to get where you want to go without the vision?
4. You suggested writing a controller; is this another program that gets input from the vision code or program? Would this be on a separate thread from the vision code? We are getting ready to make a test bed that is basically a computer with the compass and some motor controllers set up on a little table that we can roll around the hallways. The idea is to test our "controller" or driving program to see if we can follow a set of lines, using heading and our vision to adjust, much like the guide lines set up in the mission scenario. Obviously we won't have depth yet. What do you think? Also, how do you get data from one program into another program, like meshing the controller program and the vision program?
Some of these questions may not be needed by my programmers, but I am making sure I understand, because if I do, they will......
Faridodin "Fredi" Lajvardi KD7WKD
480-266-9929 cell phone
Home Phone 480-813-2475
Carl Hayden Community High School
602-764-3000, ext 60233
3333 W. Roosevelt
Phoenix Arizona 85009
"To create a world where science and technology are celebrated... where young people dream of becoming science and technology heroes"
Dean Kamen, Founder of FIRST
Center For Marine Science Program Director
Falcon Robotics Team Sponsor, US FIRST ROBOTICS Team 842
Falcon Robotics ROV Team Sponsor
FIRST Tech Challenge Partner
Arcrunner Electric Vehicle Racing Team Sponsor, 1st and longest-running program in the nation!
Vice President-APASE, Arizona Promoters of Applied Science in Education
Thank you and have a great day!
Websites with projects we are involved with:
Carl Hayden High School Robotics team webpage
http://www.phxhs.k12.az.us/education/club/club.php?sectionid=3670
National Underwater Robotics Challenge website
www.h2orobots.org
FIRST Tech Challenge for Arizona
http://azfirsttech.org/default.aspx
Arizona Promoters of Applied Science in Education
http://apaseplace.org/default.aspx
Arizona FIRST Lego League
http://azlego.googlepages.com/
FIRST AZ
http://firstaz.org/default.aspx
--------------------------------------------------------------------------------
Date: Sun, 14 Nov 2010 13:05:32 -0700
Subject: Fwd: Problem Aligning Ourselves With The Orange Marker
From: johnrangel842@gmail.com
To: ke7jlm@gmail.com; coachfredi@gmail.com
---------- Forwarded message ----------
From: JB Rajsky
Date: Fri, Nov 12, 2010 at 8:43 PM
Subject: Re: Problem Aligning Ourselves With The Orange Marker
To: "John Rangel(kf7fdb)"
Hello John,
Something to note is that if you follow the pipes in order, they should never be rotated more than 90 degrees, so this helps with direction. I'd recommend fitting a bounding box around the rectangle and then using image moments to calculate the angle--this should give you +/- angles (there's a rough sketch below). Lastly, I don't think you want the vision code to directly control the heading of the vehicle. I'd suggest writing a controller that uses your sensors to maintain a desired heading, depth, speed, etc., and having the vision code set the desired heading, for example. This will allow for much easier testing and debugging than controlling the motors directly from the vision code.
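To illustrate the +/- angle part, here's a rough Python/OpenCV sketch. It assumes you already have the pipe's contour; pipe_contour, current_heading(), and set_desired_heading() are made-up names, and the exact sign/zero convention will depend on how your camera is mounted:

import cv2
import math

def pipe_angle_degrees(contour):
    # Orientation of the blob's long axis from second-order central moments.
    # 0.5 * atan2(...) always lands in (-90, +90] degrees, which fits the rule
    # that pipes followed in order are never rotated more than 90 degrees.
    m = cv2.moments(contour)
    return math.degrees(0.5 * math.atan2(2.0 * m['mu11'], m['mu20'] - m['mu02']))

# Instead of driving the motors from the vision code, just nudge the
# controller's setpoint:
# correction = pipe_angle_degrees(pipe_contour)
# set_desired_heading(current_heading() + correction)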
JB
On 11/12/2010 06:29 PM, John Rangel(kf7fdb) wrote:
Hey JB, I have a question about how you guys aligned yourselves with the orange marker. How did you guys do it? The way we are planning to do it is with a combination of our compass and vision. In our vision code, the way we are detecting orientation is by getting the two top points of the rectangle and then calculating the slope between those two points. However, one thing we found in our program is that when you rotate the camera, the points change. We found that this is because we are ordering our points from least to greatest on the x-axis. This is not helpful for us because we never get a negative slope and therefore do not know which direction to tell our robot to turn. One theory we had was to use the compass to get a general idea of which direction to go, for example north. Then we would see which way the compass is facing and use that to tell the robot which direction to rotate. Then we would continue rotating until the slope we are getting = 0. We are not sure if this is the best way to do it. What do you think?
--
John Rangel
KF7FDB
Programmer for the Falcon Robotics Team



