Artificial Intelligence (Nilsson, English lecture slides) - Chap06-1

Robot Vision (Chapter 6)

Introduction
- Computer vision: endowing machines with the means to "see".
- Create an image of a scene and extract features.
- A very difficult problem for machines:
  - Several different scenes can produce identical images.
  - Images can be noisy.
  - The image cannot be directly inverted to reconstruct the scene.

Human Vision (1)-(3) [figure slides]

Steering an Automobile
- ALVINN system (Pomerleau 1991, 1993)
  - Uses an artificial neural network.
  - Used a 30x32 TV image as input (960 input nodes), 5 hidden nodes, 30 output nodes.
- Training regime: modified "on-the-fly".
  - A human driver drives the car, and his actual steering angles are taken as correct labels for the corresponding inputs.
  - Shifted and rotated images were also used for training.
- ALVINN has driven for 120 consecutive kilometers at speeds of up to 100 km/h.

Steering an Automobile - ALVINN [figure slide]
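The network dimensions on the slide (960 inputs, 5 hidden units, 30 outputs) can be written down directly as a small feed-forward network. The sketch below is only a minimal illustration of that shape in NumPy, not Pomerleau's actual ALVINN code; the random weights, the tanh activation, and the argmax decoding of the steering units are assumptions.

```python
import numpy as np

# Minimal sketch of an ALVINN-shaped network: 30x32 image -> 5 hidden -> 30 outputs.
# Weights are random here; the real system learned them from a human driver's steering.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(960, 5))   # input-to-hidden weights (assumed initialization)
b1 = np.zeros(5)
W2 = rng.normal(scale=0.1, size=(5, 30))    # hidden-to-output weights
b2 = np.zeros(30)

def steer(image_30x32: np.ndarray) -> int:
    """Forward pass: return the index of the most active of the 30 steering units."""
    x = image_30x32.reshape(960)             # flatten the 30x32 camera image
    h = np.tanh(x @ W1 + b1)                 # hidden layer (activation is an assumption)
    y = np.tanh(h @ W2 + b2)                 # 30 output units spanning steering angles
    return int(np.argmax(y))                 # pick the strongest steering response

angle_unit = steer(rng.random((30, 32)))     # a random "image", just to run the sketch
```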

Two stages of Robot Vision (1)
- Finding the objects in the scene
  - Looking for "edges" in the image.
    - Edge: a part of the image across which the image intensity, or some other property of the image, changes abruptly.
  - Attempting to segment the image into regions.
    - Region: a part of the image in which the image intensity, or some other property of the image, changes only gradually.

Two stages of Robot Vision (2)
- Image processing stage
  - Transforms the original image into one that is more amenable to the scene analysis stage.
  - Involves various filtering operations that help reduce noise, accentuate edges, and find regions.
- Scene analysis stage
  - Attempts to create an iconic or a feature-based description of the original scene, providing the task-specific information.

Two stages of Robot Vision (3)
- The scene analysis stage produces task-specific information.
  - If only the disposition of the blocks is important, an appropriate iconic model can be (C B A FLOOR).
  - If it is important to determine whether there is another block on top of the block labeled C, an adequate description will include the value of a feature, CLEAR_C (a small sketch follows below).
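As a toy illustration of the distinction above, the iconic model (C B A FLOOR) and the feature CLEAR_C can be written as follows. Only the stack ordering and the feature name come from the slides; the Python representation itself is an assumption.

```python
# Iconic model of the block stack from the slide, topmost block first.
stack = ["C", "B", "A", "FLOOR"]            # corresponds to (C B A FLOOR)

# Task-specific feature: is there another block on top of block C?
# C is clear exactly when it is the topmost element of the stack.
CLEAR_C = (stack[0] == "C")                  # True for this scene
print(stack, CLEAR_C)
```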

Averaging (1)
- The original image can be represented as an m x n array of numbers. The numbers represent the light intensities at corresponding points in the image.
- Certain irregularities in the image can be smoothed by an averaging operation.
- The averaging operation involves sliding an averaging window all over the image array.

Averaging (2)
- The smoothing operation thickens broad lines and eliminates thin lines and small details.
- The averaging window is centered at each pixel, and the weighted sum of all the pixel values within the window is computed. This sum then replaces the original value at that pixel (see the sketch after this list).

Averaging (3)
- A common function used for smoothing is a two-dimensional Gaussian.
- Convolving an image with a Gaussian is equivalent to finding the solution to a diffusion equation when the initial condition is given by the image intensity field.

Averaging (4) [figure slide]
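A minimal sketch of the weighted-window averaging described above, using a 2-D Gaussian as the weights and SciPy's convolution to slide the window over the image. The kernel size and sigma are arbitrary choices, not values from the slides.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Build a normalized 2-D Gaussian averaging window."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()                        # weights sum to 1, so intensity scale is kept

def smooth(image: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Slide the Gaussian window over the image: each pixel becomes a weighted sum
    of its neighbours, which suppresses small irregularities (and thin details)."""
    return convolve2d(image, gaussian_kernel(size, sigma), mode="same", boundary="symm")

# Example: smooth a noisy 64x64 test image.
noisy = np.random.default_rng(0).random((64, 64))
smoothed = smooth(noisy, size=5, sigma=1.0)
```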

Edge enhancement (1)
- Edge: any boundary between parts of the image with markedly different values of some property.
- Edges are often related to important object properties.
- Edges in the image occur at places where the second derivative of the image intensity is zero.

Edge enhancement (2) [figure slide]

Combining Edge Enhancement with Averaging (1)
- Edge enhancement alone would tend to emphasize noise elements along with enhancing edges.
- To be less sensitive to noise, both operations are needed (first averaging and then edge enhancing).
- We can convolve the one-dimensional image with the second derivative of a Gaussian curve to combine both operations.

Combining Edge Enhancement with Averaging (2)
- The Laplacian is a second-derivative-type operation that enhances edges of any orientation.
- The Laplacian of the two-dimensional Gaussian function looks like an upside-down hat, often called a sombrero function.
- The entire averaging/edge-finding operation can be achieved by convolving the image with the sombrero function (called Laplacian filtering); a small sketch follows below.
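A minimal sketch of Laplacian filtering as described above: build the Laplacian-of-Gaussian ("sombrero") kernel, convolve it with the image, and look for zero crossings. The closed-form Laplacian-of-Gaussian expression is standard, but the kernel size, sigma, and the simple zero-crossing test here are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def log_kernel(size: int = 9, sigma: float = 1.4) -> np.ndarray:
    """Laplacian of a 2-D Gaussian (the 'sombrero' shape, sampled on a grid)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2.0 * sigma**2))
    log = (r2 - 2.0 * sigma**2) / sigma**4 * g   # analytic Laplacian of the Gaussian
    return log - log.mean()                      # zero mean, so flat regions map to ~0

def laplacian_filter(image: np.ndarray, size: int = 9, sigma: float = 1.4) -> np.ndarray:
    """One convolution does both jobs: Gaussian averaging plus second-derivative
    edge enhancement. Edges show up as zero crossings of the result."""
    return convolve2d(image, log_kernel(size, sigma), mode="same", boundary="symm")

def zero_crossings(response: np.ndarray) -> np.ndarray:
    """Mark pixels where the filtered image changes sign horizontally or vertically."""
    sign = np.sign(response)
    horiz = sign[:, :-1] * sign[:, 1:] < 0
    vert = sign[:-1, :] * sign[1:, :] < 0
    edges = np.zeros_like(response, dtype=bool)
    edges[:, :-1] |= horiz
    edges[:-1, :] |= vert
    return edges
```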

6.4.4 Finding Regions
- Another method for processing the image is to find "regions".
- Finding regions is the counterpart of finding outlines.

A region of the image
- A region is homogeneous:
  - the difference in intensity values of pixels in the region is no more than some ε, or
  - a polynomial surface of degree k can be fitted to the intensity values of pixels in the region with largest error less than ε.
- For no two adjacent regions is it the case that the union of all the pixels in these two regions satisfies the homogeneity property.
- Each region corresponds to a world object or a meaningful part of one.

Split-and-merge method (a sketch follows below)
1. The algorithm begins with just one candidate region: the whole image.
2. Until no more splits need be made: every candidate region that does not satisfy the homogeneity property is split into four equal-sized candidate regions.
3. Adjacent candidate regions are merged if the union of their pixels satisfies the homogeneity property.
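A compact sketch of the split phase described above, using the simpler of the two homogeneity tests (intensity spread within ε). The recursive quadrant structure follows the slide; leaving out the merge step's bookkeeping is a simplification, not part of the slides' procedure.

```python
import numpy as np

def homogeneous(block: np.ndarray, eps: float) -> bool:
    """Homogeneity test from the slide: intensity spread within the region <= eps."""
    return float(block.max() - block.min()) <= eps

def split(image: np.ndarray, eps: float, r0=0, c0=None, r1=None, c1=None):
    """Recursively split non-homogeneous regions into four equal quadrants.
    Yields (row_start, col_start, row_end, col_end) for each accepted region."""
    if c0 is None:
        c0, r1, c1 = 0, image.shape[0], image.shape[1]
    block = image[r0:r1, c0:c1]
    if homogeneous(block, eps) or (r1 - r0) <= 1 or (c1 - c0) <= 1:
        yield (r0, c0, r1, c1)
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rs, cs, re, ce in [(r0, c0, rm, cm), (r0, cm, rm, c1),
                           (rm, c0, r1, cm), (rm, cm, r1, c1)]:
        yield from split(image, eps, rs, cs, re, ce)

# Example: a 16x16 image that is dark on the left half, bright on the right half.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
regions = list(split(img, eps=0.1))
# A full split-and-merge would now merge adjacent regions whose union is homogeneous.
```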

Regions Found by Split-and-Merge for a Grid-World Scene (from Fig. 6.12) [figure slide]

"Cleaning up" the regions found by the split-and-merge method
- Eliminating very small regions (some of which are transitions between larger regions).
- Straightening bounding lines.
- Taking into account the known shapes of objects likely to be in the scene.

6.4.5 Using Image Attributes Other Than Intensity
- An image attribute other than intensity homogeneity: visual texture.
  - Fine-grained variation of the surface reflectivity of the objects.
  - Examples: a field of grass, a section of carpet, foliage in trees, the fur of animals.
  - The reflectivity variations in objects cause similar fine-grained structure in the image intensity.

Methods for analyzing texture
- Structural methods
  - Represent regions in the image by a tessellation (a tiling pattern) of primitive "texels": small shapes comprising black and white parts.
- Statistical methods
  - Based on the idea that image texture is best described by a probability distribution for the intensity values over regions of the image (a small sketch follows below).
  - Example: an image of a grassy field in which the blades of grass are oriented vertically yields a probability distribution that peaks for thin, vertically oriented regions of high intensity, separated by regions of low intensity.
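A minimal sketch of the statistical view above: describe a region's texture by the empirical distribution of its intensity values (a normalized histogram). Real texture statistics are usually richer (the grassy-field example also depends on orientation), so this is only the simplest instance of the idea; the bin count is an arbitrary choice.

```python
import numpy as np

def intensity_distribution(region: np.ndarray, bins: int = 16) -> np.ndarray:
    """Empirical probability distribution of intensity values over a region."""
    hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()                  # normalize counts to probabilities

# Two toy "textures": uniform noise vs. thin vertical bright stripes on a dark background.
rng = np.random.default_rng(0)
noise = rng.random((32, 32))
stripes = np.zeros((32, 32))
stripes[:, ::4] = 1.0                         # thin, vertically oriented high-intensity columns

p_noise = intensity_distribution(noise)       # roughly flat distribution
p_stripes = intensity_distribution(stripes)   # mass concentrated near 0 and 1
```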

Other attributes
- If we had a direct way to measure the range from the camera to objects in the scene, we could produce a "range image" and look for abrupt range differences.
  - Range image: each pixel value represents the distance from the corresponding point in the scene to the camera.
- Motion, color.

6.5 Scene Analysis (1)
- Scene analysis: extracting from the image the needed information about the scene.
- Requires either additional images (for stereo vision) or general information about the kinds of scenes, since the scene-to-image transformation is many-to-one.
- The required knowledge may be very general or quite specific, explicit or implicit.

6.5 Scene Analysis (2)
- Knowledge of surface reflectivity characteristics and the shading of intensity in the image gives information about the shape of smooth objects in the scene.
- Iconic scene analysis: build a model of the scene or parts of the scene.
- Feature-based scene analysis: extracts the features of the scene needed by the task (task-oriented or "purposive" vision).

6.5.1 Interpreting Lines and Curves in the Image
- Interpreting the line drawing: an association between scene properties and the components of a line drawing.
- Trihedral vertex polyhedra: the scene is assumed to contain only planar surfaces, such that no more than three surfaces intersect in a point.

Three kinds of edges in trihedral-vertex polyhedra (1/2)
- There are only three kinds of ways in which two planes can intersect in a scene edge.
- Occlude
  - One kind of edge is formed by two planes, with one of them occluding the other.
  - Labeled in Fig. 6.15 with arrows, the arrowhead pointing along the edge such that the surface doing the occluding is to the right of the arrow.

Three kinds of edges in trihedral-vertex polyhedra (2/2)
- Blade
  - Two planes can intersect such that both planes are visible in the scene; the two surfaces form a convex edge.
  - Labeled with pluses (+).
- Fold
  - The edge is concave.
  - Labeled with minuses (-).

Labels for Lines at Junctions [figure slide]

Line-labeling scene analysis (1/2)
1. Label all of the junctions in the image as V, W, Y, or T junctions according to the shape of the junctions in the image.

Line-labeling scene analysis (2/2)
2. Assign +, -, or arrow labels to the lines in the image.
- An image line that connects two junctions must have a consistent labeling.
- If there is no consistent labeling, there must have been some error in converting the image into a line drawing, or the scene must not have been one of trihedral polyhedra.
- This is a constraint-satisfaction problem (a sketch follows below).
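A skeletal sketch of that constraint-satisfaction formulation: each line is a variable whose domain is the four labels (+, -, and the two occluding-arrow directions), and each junction constrains the tuple of labels on its incident lines to one of the combinations allowed for its type. The junction constraint in the example is a placeholder, not the full V/W/Y/T (Huffman-Clowes) catalog; only the backtracking structure is illustrated.

```python
from typing import Dict, List, Optional, Tuple

LABELS = ["+", "-", "->", "<-"]   # convex, concave, and the two occluding directions

# A junction is the ordered list of lines it touches plus the set of label tuples
# its type allows. The tuples used below are placeholders, not the real catalog.
Junction = Tuple[List[str], set]

def label_lines(lines: List[str], junctions: List[Junction]) -> Optional[Dict[str, str]]:
    """Backtracking search for a consistent assignment of labels to lines."""
    assignment: Dict[str, str] = {}

    def consistent() -> bool:
        for incident, allowed in junctions:
            labels = tuple(assignment.get(name) for name in incident)
            if None in labels:            # junction not fully labeled yet: defer the check
                continue
            if labels not in allowed:
                return False
        return True

    def backtrack(i: int) -> bool:
        if i == len(lines):
            return True
        for lab in LABELS:
            assignment[lines[i]] = lab
            if consistent() and backtrack(i + 1):
                return True
            del assignment[lines[i]]
        return False

    return assignment if backtrack(0) else None   # None means no consistent labeling exists

# Tiny example: two lines meeting at one junction with made-up allowed labelings.
solution = label_lines(["a", "b"], [(["a", "b"], {("+", "+"), ("-", "-")})])
```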

6.5.2 Model-Based Vision (1/2)
- If we knew that the scene contained a parallelepiped (as in Figure 6.15), we could attempt to fit a projection of a parallelepiped to components of an image of this scene.
- Generalized cylinders can be used as building blocks for model construction.
- Each cylinder has 9 parameters.

Model-Based Vision (2/2)
- An example: a rough scene reconstruction of a human figure.
- Hierarchical representation: each cylinder in the model can be articulated into a set of smaller cylinders.

6.6 Stereo Vision and Depth Information
- Depth information can be obtained using stereo vision, which is based on triangulation calculations using two (or more) images.
- Some depth information can be extracted from a single image:
  - The analysis of texture in the image can indicate that some elements in the scene are closer than others.
  - More precise depth information: if we know that a perceived object is on the floor and we know the camera's height above the floor, we can calculate the distance to the object (see the sketch below).

Depth Calculation from a Single Image [figure slide]
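A minimal sketch of the single-image case above: with the camera at a known height above the floor and a known angle of depression to the point where the object meets the floor, the ground distance follows from one right triangle. The depression-angle parameterization is an assumption about how the figure sets up the geometry; the slide only states that the camera height and the on-the-floor constraint suffice.

```python
import math

def distance_to_floor_point(camera_height_m: float, depression_angle_rad: float) -> float:
    """Ground distance to the point where an object touches the floor,
    given the camera height and the angle of the viewing ray below horizontal."""
    return camera_height_m / math.tan(depression_angle_rad)

# Example: camera 1.5 m above the floor, looking 10 degrees below horizontal.
d = distance_to_floor_point(1.5, math.radians(10.0))   # ~8.5 m
```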

Stereo Vision
- Stereo vision uses triangulation.
- Two lenses whose centers are separated by a baseline, b.
- The image points of a scene point at distance d are created by these lenses.
- Each image point makes an angle with its lens center.
- The optical axes are parallel, the image planes are coplanar, and the scene point is in the same plane as that formed by the two parallel optical axes.

Triangulation in Stereo Vision [figure slide]
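A minimal sketch of the triangulation under the parallel-optical-axes setup listed above. Calling the two ray angles alpha and beta (names chosen here; the original symbols did not survive extraction) and measuring each from its camera's optical axis toward the scene point, the two right triangles share the depth d and their bases sum to the baseline, giving d = b / (tan(alpha) + tan(beta)). This particular angle convention is an assumption.

```python
import math

def stereo_depth(baseline_m: float, alpha_rad: float, beta_rad: float) -> float:
    """Depth of a scene point seen by two cameras with parallel optical axes.

    alpha and beta are the angles of the viewing rays measured from each camera's
    optical axis toward the scene point, so the horizontal offsets d*tan(alpha)
    and d*tan(beta) add up to the baseline b.
    """
    return baseline_m / (math.tan(alpha_rad) + math.tan(beta_rad))

# Example: 0.2 m baseline, rays at 4 and 6 degrees off the optical axes.
d = stereo_depth(0.2, math.radians(4.0), math.radians(6.0))   # ~1.14 m
```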

The main complication in stereo vision
- In scenes containing more than one point, it must be established which pair of points in the two images corresponds to the same scene point.
- For a pixel in one image, we must be able to identify the corresponding pixel in the other image: the correspondence problem.

Techniques for the correspondence problem
- Geometric analysis reveals that we need only search along one dimension (the epipolar line).
- One-dimensional searches can be implemented by cross-correlation of two image intensity profiles along corresponding epipolar lines (a sketch follows below).
- We do not have to find correspondences between individual pairs of image points; we can do so between pairs of larger image components, such as lines.
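A minimal sketch of that one-dimensional search: take a small window around a pixel on one scan line and slide it along the corresponding scan line of the other image, scoring each offset by normalized cross-correlation. With rectified images the scan line plays the role of the epipolar line; the window size and the normalization details are assumptions.

```python
import numpy as np

def best_match(left_row: np.ndarray, right_row: np.ndarray,
               x: int, half_window: int = 3) -> int:
    """Find the column in right_row whose neighbourhood best matches the
    neighbourhood of column x in left_row, by normalized cross-correlation."""
    patch = left_row[x - half_window:x + half_window + 1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_x, best_score = -1, -np.inf
    for cx in range(half_window, len(right_row) - half_window):
        cand = right_row[cx - half_window:cx + half_window + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = float(np.dot(patch, cand))          # correlation of the two intensity profiles
        if score > best_score:
            best_x, best_score = cx, score
    return best_x

# Example: the right scan line is the left one shifted by 4 pixels (disparity = 4).
left = np.sin(np.linspace(0, 6 * np.pi, 80))
right = np.roll(left, 4)
disparity = best_match(left, right, x=30) - 30       # ~4
```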

Assignments
- Pages 111-112: Ex. 6.2, Ex. 6.3.
