CNN networks are well suited to image recognition; convolutional neural networks are mainly used to process and recognize images. A convolutional neural network consists of the following parts: an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer.

We train on CIFAR-10, whose images are 32 x 32. A convolutional layer slides its kernels over the input to compute feature maps, and the pooling layer then reduces the spatial dimensions.

The network maps an input of 3 x 32 x 32 to 10 class scores (the shape check at the end of this post traces how these sizes arise):

Type               Weight           Bias
Conv (3, 12, 5)    (12, 3, 5, 5)    12
Conv (12, 12, 5)   (12, 12, 5, 5)   12
Norm               12               12
Conv (12, 24, 5)   (24, 12, 5, 5)   24
Conv (24, 24, 5)   (24, 24, 5, 5)   24
Norm               24               24
Linear             (10, 2400)       10

Training the classification model

Preparing the data

from torchvision.datasets import CIFAR10
from torchvision.transforms import transforms
from torch.utils.data import DataLoader

# Loading and normalizing the data.
# Define transformations for the training and test sets
transformations = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# CIFAR10 dataset consists of 50K training images. We define the batch size of 10 to load 5,000 batches of images.
batch_size = 10
number_of_labels = 10

# Create an instance for training.
# When we run this code for the first time, the CIFAR10 train dataset will be downloaded locally.
train_set = CIFAR10(root="./data", train=True, transform=transformations, download=True)

# Create a loader for the training set which will read the data within batch size and put into memory.
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=0)
print("The number of images in a training set is: ", len(train_loader) * batch_size)

# Create an instance for testing; note that train is set to False.
# When we run this code for the first time, the CIFAR10 test dataset will be downloaded locally.
test_set = CIFAR10(root="./data", train=False, transform=transformations, download=True)

# Create a loader for the test set which will read the data within batch size and put into memory.
# Note that shuffle is set to False for the test loader.
test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=0)
print("The number of images in a test set is: ", len(test_loader) * batch_size)

print("The number of batches per epoch is: ", len(train_loader))
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

Building the network

import torch
import torch.nn as nn
import torchvision
import torch.nn.functional as F

# Define a convolutional neural network
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=5, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(12)
        self.conv2 = nn.Conv2d(in_channels=12, out_channels=12, kernel_size=5, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(12)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv4 = nn.Conv2d(in_channels=12, out_channels=24, kernel_size=5, stride=1, padding=1)
        self.bn4 = nn.BatchNorm2d(24)
        self.conv5 = nn.Conv2d(in_channels=24, out_channels=24, kernel_size=5, stride=1, padding=1)
        self.bn5 = nn.BatchNorm2d(24)
        self.fc1 = nn.Linear(24 * 10 * 10, 10)

    def forward(self, input):
        output = F.relu(self.bn1(self.conv1(input)))
        output = F.relu(self.bn2(self.conv2(output)))
        output = self.pool(output)
        output = F.relu(self.bn4(self.conv4(output)))
        output = F.relu(self.bn5(self.conv5(output)))
        output = output.view(-1, 24 * 10 * 10)
        output = self.fc1(output)
        return output

# Instantiate a neural network model
model = Network()

Defining the loss function

We use cross entropy as the loss function. Cross entropy comes in two forms, binary cross entropy and multi-class cross entropy; since CIFAR-10 has 10 classes we use the multi-class version, nn.CrossEntropyLoss (a short commented illustration appears just before the training loop below).

from torch.optim import Adam

loss_fn = nn.CrossEntropyLoss()
optimizer = Adam(model.parameters(), lr=0.001, weight_decay=0.0001)

Training the model

from torch.autograd import Variable

# Function to save the model
def saveModel():
    path = "./myFirstModel.pth"
    torch.save(model.state_dict(), path)

# Function to test the model with the test dataset and print the accuracy for the test images
def testAccuracy():
    model.eval()
    accuracy = 0.0
    total = 0.0
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            # run the model on the test set to predict labels
            outputs = model(images.to(device))
            # the label with the highest energy will be our prediction
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            accuracy += (predicted == labels.to(device)).sum().item()

    # compute the accuracy over all test images
    accuracy = 100 * accuracy / total
    return accuracy

# Training function. We simply have to loop over our data iterator and feed the inputs to the network and optimize.
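# Before the training loop, a brief illustration of what loss_fn expects (a sketch that
# only reuses loss_fn, batch_size and number_of_labels defined earlier in this script):
# nn.CrossEntropyLoss is the multi-class cross entropy used here; it takes raw logits of
# shape [batch_size, number_of_labels] together with integer class indices of shape
# [batch_size], e.g.
#
#     example_logits  = torch.randn(batch_size, number_of_labels)          # stands in for model(images)
#     example_targets = torch.randint(0, number_of_labels, (batch_size,))  # stands in for labels
#     print(loss_fn(example_logits, example_targets))                      # one scalar loss value
#
# The binary cross entropy mentioned earlier (nn.BCEWithLogitsLoss) would instead take a
# single logit per sample and float 0./1. targets.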
def train(num_epochs):
    best_accuracy = 0.0

    # Define your execution device
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("The model will be running on", device, "device")
    # Convert model parameters and buffers to CPU or Cuda
    model.to(device)

    for epoch in range(num_epochs):  # loop over the dataset multiple times
        running_loss = 0.0
        running_acc = 0.0

        for i, (images, labels) in enumerate(train_loader, 0):
            # get the inputs
            images = Variable(images.to(device))
            labels = Variable(labels.to(device))

            # zero the parameter gradients
            optimizer.zero_grad()
            # predict classes using images from the training set
            outputs = model(images)
            # compute the loss based on model output and real labels
            loss = loss_fn(outputs, labels)
            # backpropagate the loss
            loss.backward()
            # adjust parameters based on the calculated gradients
            optimizer.step()

            # Let's print statistics for every 1,000 batches
            running_loss += loss.item()  # extract the loss value
            if i % 1000 == 999:
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 1000))
                # zero the loss
                running_loss = 0.0

        # Compute and print the average accuracy for this epoch when tested over all 10000 test images
        accuracy = testAccuracy()
        print('For epoch', epoch + 1, 'the test accuracy over the whole test set is %d %%' % accuracy)

        # we want to save the model if the accuracy is the best
        if accuracy > best_accuracy:
            saveModel()
            best_accuracy = accuracy

Testing the model

import matplotlib.pyplot as plt
import numpy as np

# Function to show the images
def imageshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# Function to test the model with a batch of images and show the label predictions
def testBatch():
    # get a batch of images from the test DataLoader
    images, labels = next(iter(test_loader))

    # show all images as one image grid
    imageshow(torchvision.utils.make_grid(images))

    # Show the real labels on the screen
    print('Real labels: ', ' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))

    # Let's see how the model identifies the labels of these examples
    outputs = model(images)

    # We got a score for each of the 10 labels. The highest (max) score should be the correct label
    _, predicted = torch.max(outputs, 1)

    # Let's show the predicted labels on the screen to compare with the real ones
    print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(batch_size)))

Running the model

if __name__ == "__main__":
    # Let's build our model
    train(5)
    print('Finished Training')

    # Test which classes performed well
    testAccuracy()

    # Let's load the model we just created and test it
    model = Network()
    path = "myFirstModel.pth"
    model.load_state_dict(torch.load(path))

    # Test with a batch of images
    testBatch()

Summary

Building a CNN model with PyTorch is fairly straightforward: after 5 epochs of training the model already reaches about 60% accuracy, i.e. roughly 6 out of every 10 images are classified correctly.
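Shape check

A quick shape check (a minimal sketch, assuming the Network class defined above is in scope) shows how an input of 3 x 32 x 32 turns into the 24 x 10 x 10 feature map that fc1 flattens: each 5 x 5 convolution with padding 1 shrinks the spatial size by 2, and the 2 x 2 max pool halves it.

import torch
import torch.nn.functional as F

net = Network()
x = torch.randn(1, 3, 32, 32)       # one fake CIFAR-10 image, N x C x H x W
x = F.relu(net.bn1(net.conv1(x)))   # -> [1, 12, 30, 30]  (32 + 2*1 - 5 + 1 = 30)
x = F.relu(net.bn2(net.conv2(x)))   # -> [1, 12, 28, 28]
x = net.pool(x)                     # -> [1, 12, 14, 14]  (2 x 2 max pool halves H and W)
x = F.relu(net.bn4(net.conv4(x)))   # -> [1, 24, 12, 12]
x = F.relu(net.bn5(net.conv5(x)))   # -> [1, 24, 10, 10]  = 24*10*10 = 2400 values
x = x.view(-1, 24 * 10 * 10)        # flatten, matching nn.Linear(24*10*10, 10) and the (10, 2400) weight in the table
print(net.fc1(x).shape)             # torch.Size([1, 10]), one score per class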