
The Simplest Cat-Face Recognition with Deep Learning

Artificial Intelligence | 5.61MB | 21 | Points required: 1

Resource description:

A first application of gradient descent (no regularization or other optimizations): just the simplest multi-layer and single-layer gradient descent. The excerpt below shows the multi-layer helper functions.
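Since only the multi-layer helpers appear in the excerpt, here is a minimal sketch of the single-layer case for contrast: plain logistic-regression gradient descent. This is an illustration under our own naming (logistic_gd is hypothetical), not code from the archive:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logistic_gd(X, Y, num_iterations=2000, learning_rate=0.005):
    """Batch gradient descent for logistic regression.
    X has shape (n_features, m_examples); Y has shape (1, m_examples)."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for i in range(num_iterations):
        A = sigmoid(np.dot(w.T, X) + b)   # forward pass: predictions in (0, 1)
        dw = np.dot(X, (A - Y).T) / m     # gradient of the cross-entropy cost w.r.t. w
        db = np.sum(A - Y) / m            # gradient w.r.t. b
        w -= learning_rate * dw           # gradient-descent update
        b -= learning_rate * db
    return w, b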
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

num_px = 64
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'


def sigmoid(Z):
    """
    Implements the sigmoid activation in numpy

    Arguments:
    Z -- numpy array of any shape

    Returns:
    A -- output of sigmoid(Z), same shape as Z
    cache -- returns Z as well, useful during backpropagation
    """
    A = 1 / (1 + np.exp(-Z))
    cache = Z
    return A, cache


def relu(Z):
    """
    Implement the RELU function.

    Arguments:
    Z -- Output of the linear layer, of any shape

    Returns:
    A -- Post-activation parameter, of the same shape as Z
    cache -- returns Z as well; stored for computing the backward pass efficiently
    """
    A = np.maximum(0, Z)
    assert (A.shape == Z.shape)
    cache = Z
    return A, cache


def relu_backward(dA, cache):
    """
    Implement the backward propagation for a single RELU unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z', stored for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    dZ = np.array(dA, copy=True)  # just converting dA to a correct object
    # When Z <= 0, set dZ to 0 as well.
    dZ[Z <= 0] = 0
    assert (dZ.shape == Z.shape)
    return dZ


def sigmoid_backward(dA, cache):
    """
    Implement the backward propagation for a single SIGMOID unit.

    Arguments:
    dA -- post-activation gradient, of any shape
    cache -- 'Z', stored for computing backward propagation efficiently

    Returns:
    dZ -- Gradient of the cost with respect to Z
    """
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    dZ = dA * s * (1 - s)
    assert (dZ.shape == Z.shape)
    return dZ


def load_data():
    train_dataset = h5py.File('猫/datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('猫/datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes


# =================================================================================================

def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                  bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(1)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) / np.sqrt(layer_dims[l-1])  # *0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

        assert (parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert (parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters


def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called the pre-activation parameter
    cache -- a python tuple containing "A", "W" and "b"; stored for computing the backward pass efficiently
    """
    Z = W.dot(A) + b
    assert (Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)
    return Z, cache


def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python tuple containing "linear_cache" and "activation_cache"; stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)

    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)
    return A, cache


def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (there are L-1 of them, indexed from 0 to L-2)
              the cache of linear_activation_forward() with "sigmoid" (there is one, indexed L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation="relu")
        caches.append(cache)

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation="sigmoid")
    caches.append(cache)

    assert (AL.shape == (1, X.shape[1]))
    return AL, caches


def compute_cost(AL, Y):
    """
    Implement the cross-entropy cost function.

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    m = Y.shape[1]
    # Compute the loss from AL and Y.
    cost = (1. / m) * (-np.dot(Y, np.log(AL).T) - np.dot(1 - Y, np.log(1 - AL).T))
    cost = np.squeeze(cost)  # makes sure the cost has the expected shape (e.g. turns [[17]] into 17)
    assert (cost.shape == ())
    return cost


def linear_backward(dZ, cache):
    ...  # the code excerpt in the original description is cut off here, mid-docstring
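The excerpt stops partway into linear_backward. For orientation, here is a minimal sketch of how helpers like these are typically tied together into a training loop. It assumes the bundled dnn_app_utils_v2.py provides L_model_backward and update_parameters, as the standard course utilities do; treat it as an illustration, not the archive's exact script:

from dnn_app_utils_v2 import L_model_backward, update_parameters  # assumed helpers from the bundled file

def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=2500):
    """Train an L-layer [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID network with batch gradient descent."""
    parameters = initialize_parameters_deep(layers_dims)
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)   # forward pass
        cost = compute_cost(AL, Y)                    # cross-entropy cost
        grads = L_model_backward(AL, Y, caches)       # backward pass over the cached values
        parameters = update_parameters(parameters, grads, learning_rate)  # gradient-descent step
        if i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    return parameters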

Resource file list:

猫脸识别.zip contains approximately 32 files (a usage sketch follows the listing):
  1. 猫/
  2. 猫/lr_utils.py 880B
  3. 猫/datasets/
  4. 猫/datasets/test_catvnoncat.h5 602.5KB
  5. 猫/datasets/train_catvnoncat.h5 2.45MB
  6. 猫/猫图/
  7. 猫/猫图/cat8.jpg 57.64KB
  8. 猫/猫图/cat2.jpg 302.91KB
  9. 猫/猫图/cat1.jpg 587KB
  10. 猫/猫图/cat4.jpg 621.36KB
  11. 猫/猫图/cat9.jpg 53.73KB
  12. 猫/猫图/cat7.jpg 159.19KB
  13. 猫/猫图/cat12.jpg 6.04KB
  14. 猫/猫图/cat11.jpg 118.5KB
  15. 猫/猫图/cat10.jpg 387.25KB
  16. 猫/猫图/cat3.jpg 331.71KB
  17. 猫/猫图/cat6.jpg 65.99KB
  18. 猫/猫图/cat5.jpg 92.23KB
  19. 猫/猫咪识别2__多层网络深度学习示例_要求理解.ipynb 287.54KB
  20. 猫/单层猫咪识别/
  21. 猫/单层猫咪识别/lr_utils.py 1.04KB
  22. 猫/单层猫咪识别/dnn_app_utils_v2.py 14.43KB
  23. 猫/单层猫咪识别/猫咪识别.py 3KB
  24. 猫/单层猫咪识别/__pycache__/
  25. 猫/单层猫咪识别/__pycache__/lr_utils.cpython-38.pyc 857B
  26. 猫/dnn_app_utils_v2.py 14.43KB
  27. 猫/多层猫咪识别/
  28. 猫/多层猫咪识别/猫咪识别2__多层网络深度学习示例_要求理解.py 15.92KB
  29. 猫/多层猫咪识别/lr_utils.py 1.04KB
  30. 猫/多层猫咪识别/dnn_app_utils_v2.py 14.43KB
  31. 猫/多层猫咪识别/__pycache__/
  32. 猫/多层猫咪识别/__pycache__/lr_utils.cpython-38.pyc 831B
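Assuming the archive is unpacked so that the 猫/ paths above resolve from the working directory, here is a hedged end-to-end usage sketch. It reuses L_layer_model from the sketch above; the layer sizes are only an example, and cat1.jpg is one of the bundled photos from the listing:

import numpy as np
from PIL import Image

train_x_orig, train_y, test_x_orig, test_y, classes = load_data()

# Flatten each (64, 64, 3) image into a column vector and rescale pixels to [0, 1].
train_x = train_x_orig.reshape(train_x_orig.shape[0], -1).T / 255.
test_x = test_x_orig.reshape(test_x_orig.shape[0], -1).T / 255.

# 12288 = 64 * 64 * 3 input features; the hidden layer sizes are an example choice.
parameters = L_layer_model(train_x, train_y, layers_dims=[12288, 20, 7, 5, 1])

# Score one of the bundled sample photos.
img = Image.open('猫/猫图/cat1.jpg').convert('RGB').resize((num_px, num_px))
x = np.asarray(img).reshape((num_px * num_px * 3, 1)) / 255.
AL, _ = L_model_forward(x, parameters)
print("cat" if AL[0, 0] > 0.5 else "non-cat")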