Game Auction House System: Degree Thesis (游戏拍卖行系统 学位论文)

大连交通大学信息工程学院 (Institute of Information Engineering, Dalian Jiaotong University)
Graduation Design (Thesis) Task Book

Topic: Game Auction House System

Tasks and requirements:

1. Design (research) content and requirements

Tasks:
1. Survey the current state of the technology behind game auction house systems and write an internship report of no fewer than 3,000 characters; submit it to the supervisor in week 3.
2. Draw up a progress plan based on the internship arrangements, have the supervisor sign it by the end of week 2, and follow it strictly.
3. Following software engineering practice, independently complete the design and development of the system; the code is estimated at about 2,000 lines.
4. Implement the functions of the game auction house system with JSP.
5. The code should be concise, the algorithms feasible, and the program should run well.

Requirements:
1. Meet the supervisor at least once a week to report on the progress of the project and answer questions.
2. After receiving the task book, find and translate foreign-language material related to the topic and the major: no fewer than 10,000 foreign-language characters, translated into no fewer than 3,000 Chinese characters, submitted to the supervisor for review in week 4.
3. Have the thesis bound by week 13 and reviewed by the supervisor. The thesis must run to more than 12,000 characters and include an overview, the overall system design, the system implementation, a performance analysis, and conclusions.
4. In teaching week 13, pass the software acceptance organized through 中软 and the teaching and research section; a software user manual must be provided at acceptance.
5. Submit and sign the application for the graduation defense in week 13.
6. Defend in week 14; a slide presentation (PPT) is required.

2. Original basis

Over several years of university study the student has covered software engineering, database principles and applications, data structures, programming languages such as C++, Visual Basic, and Java, and networking, and is therefore able to independently design and develop a small-to-medium project. The school's equipment and environment are available for internship and lab work, and specialist teachers are available for guidance.

3. References

[1] 王诚梅. JSP案例开发集锦[M]. 北京: 电子工业出版社, 2005.
[2] 吴晓松. 国际电子商务发展状况及我国应对策略[J]. 云南财贸学院学报, 2001.
[3] 军征, 闰众. 电子商务应用与重构案例分析[M]. 北京: 高等教育出版社, 2003.
[4] 唐有明. JSP动态网站开发基础练习·典型案例[M]. 北京: 清华大学出版社, 2006.
[5] 陈兵. 网络安全与电子商务[M]. 北京: 北京大学出版社, 2002.
[6] 池雅庆. JSP项目开发实践[M]. 北京: 中国铁道出版社, 2006.
[7] 黄明. JSP信息系统设计与开发实例[M]. 上海: 机械工业出版社, 2004.
[8] 萨师煊, 王珊. 数据库系统概论[M]. 北京: 高等教育出版社, 2000.
[9] 陈旭东, 刘迪仁. JSP 2.0应用教程[M]. 北京: 清华大学出版社, 2006.
[10] 叶乃沂. 电子商务信息时代的管理与战略[M]. 上海: 上海交通大学出版社, 2002.
[11] Juan Lipson Vuong. A semantics-based routing scheme for grid resource discovery[M]. E-Science: First International Conference on E-Science and Grid Computing, 2005.
[12] Cay S. Horstmann, Gary Cornell. Core Java 2, Volume I: Fundamentals[M]. Pearson Education, 2005.

Supervisor's signature: ________    Programme (direction) head's signature: ________    March 26, 2012

大连交通大学信息工程学院 Graduation Design (Thesis) Progress Plan and Assessment Form

Student: 李青霖    Class: Software Engineering 08-1    Supervisors: 常敬岩, 史原    Topic: Game Auction House System

Week 1: complete the task book and submit the progress plan
Week 2: complete the research report and the English translation
Week 3: market research and requirements analysis
Week 4: preliminary system analysis and design
Week 5: detailed system design; begin coding
Week 6: system coding; first draft of the thesis
Week 7: complete the system coding; begin debugging
Week 8: debug the system code; submit the first draft of the thesis
Week 9: finish coding and debugging; polish the thesis
Week 10: finish writing the thesis; code testing
Week 11: final draft of the thesis; printing and binding
Week 12: submit the final thesis and the code
Week 13: submit the thesis deliverables
Week 14: thesis defense

Supervisor's signature: ________    Date: ________
Note: the "planned content" column is filled in by the student; the rest is filled in by the supervisor at assessment.

大连交通大学信息工程学院 Graduation Design (Thesis) Foreign-Language Translation

Student: 李青霖    Class: Software Engineering 08-1
Supervisors: 常敬岩 (senior engineer), 史原 (lecturer)
Unit: Software Engineering Teaching and Research Section, Department of Information Science    Section head: 刘瑞杰
Completed: April 13, 2012

A clustering method to distribute a database on a grid
ScienceDirect: Future Generation Computer Systems 23 (2007) 997-1002

Summary: Clusters and grids of workstations provide available resources for data mining processes. To exploit these resources, new distributed algorithms are necessary, particularly concerning the way to distribute data and to use this partition. We present a clustering algorithm dubbed Progressive Clustering that provides an "intelligent" distribution of data on grids. The usefulness of this algorithm is shown for several distributed data mining tasks.

Keywords: Grid and parallel computing; Data mining; Clustering

Introduction

Knowledge discovery in databases, also called data mining, is a valuable engineering tool that serves to extract useful information from very large databases. This tool usually needs high computing capabilities, which could be provided by parallelism and distribution. The work developed here is part of the DisDaMin project, which deals with data mining issues (association rules, clustering, …) using distributed computing. DisDaMin's aim is to develop parallel and distributed solutions for data mining problems. It achieves two gains in execution time: a gain from the use of parallelism, and a gain from decreased computation (by using an intelligent distribution of data and computation). In parallel and distributed environments such as grids or clusters, constraints inherent to the execution platform must be taken into account in the algorithms. The absence of a central memory forces us to distribute the database into fragments and to handle these fragments in parallel. Because of the high communication cost in this kind of environment, the parallel computation must be as autonomous as possible to avoid costly communications (or at least synchronizations). However, existing grid data mining projects (e.g. Discovery Net, GridMiner, DMGA [7], or Knowledge Grid [11]) provide mechanisms for integrating and deploying classical algorithms on a grid, but not new grid-specific algorithms. The DisDaMin project, by contrast, intends to tackle data mining tasks considering both the specifics of data mining and the specifics of grid computing. For data mining problems it is necessary to obtain an intelligent data partition, in order to compute on more independent data fragments. The main problem is how to obtain this intelligent partition. For the association rules problem, for example, the main criterion for an intelligent partition is that data rows within a fragment are as similar as possible (according to the values of each attribute), while data rows in different fragments are as dissimilar as possible. This criterion allows us to parallelize a problem that normally needs to access the whole database, and to decrease its complexity (see [2]). As this distribution criterion resembles the objective of clustering algorithms, the partition can be produced by a clustering treatment. The usefulness of the intelligent partition obtained from clustering for the association rules problem has already been studied (see [2]). Clearly the clustering phase itself has to be distributed, and it needs to be fast in order not to slow down the global execution time. Clustering methods are described below, before introducing the Distributed Progressive Clustering algorithm for execution on grids.
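The "similar within a fragment, dissimilar between fragments" criterion can be made concrete with a short sketch. The code below is illustrative only (the function names and the toy data are mine, not the paper's): it scores a horizontal partition by its mean intra-fragment and mean inter-fragment Euclidean distance, which is one plausible reading of the criterion.

```python
def dist(a, b):
    """Euclidean distance between two attribute-value rows."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def partition_score(fragments):
    """Score a horizontal partition: rows inside a fragment should be
    close together, rows in different fragments far apart.  Returns
    (mean intra-fragment distance, mean inter-fragment distance); a good
    partition has a low first value and a high second value."""
    intra, inter = [], []
    for i, frag in enumerate(fragments):
        for a in range(len(frag)):
            for b in range(a + 1, len(frag)):
                intra.append(dist(frag[a], frag[b]))
        for other in fragments[i + 1:]:
            for r in frag:
                for s in other:
                    inter.append(dist(r, s))
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(intra), mean(inter)

# Two fragments whose rows were grouped by similarity ...
good = [[(0, 0), (0, 1), (1, 0)], [(9, 9), (9, 8), (8, 9)]]
# ... versus the same rows split arbitrarily.
bad = [[(0, 0), (9, 9), (0, 1)], [(9, 8), (1, 0), (8, 9)]]

gi, ge = partition_score(good)
bi, be = partition_score(bad)
assert gi < bi and ge > be  # the similarity-grouped partition scores better
```

On this toy data the similarity-grouped partition has tight fragments that are far apart, exactly the property that lets association-rule mining proceed on each fragment independently.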

Fig. 1. KMeans and agglomerative clustering principle.

Clustering

Clustering is the process of partitioning data into distinct groups (clusters) so that objects within the same cluster are similar but dissimilar from objects in other clusters. Clustering methods can be separated according to two leading principles: hierarchical methods and partitioning methods. Hierarchical methods comprise agglomerative ones (which initially consider a partition whose clusters each contain a single data instance, and merge neighbouring clusters until a termination criterion is met) and divisive ones (which initially consider a partition with one cluster containing all data instances, and split clusters iteratively until termination). Partitioning methods comprise distance-based methods (such as KMeans [8]), density-based methods, and probability-based methods. Other criteria permit us to distinguish between clustering methods (see [10]): methods based on the membership degree of data instances to clusters (hard, as cited before, or fuzzy (see [4])); incremental methods, for which data instances are considered as they become available instead of all at once (see [5]); methods based on neighbourhood search (k-nearest neighbours); and so on. Two well-known clustering algorithms are the partitioning KMeans (see [8]), which yields approximate results with an acceptable time complexity, and agglomerative methods (see [12]), which yield results of relatively good quality but are limited by their time complexity.

Principle of KMeans: KMeans is an iterative algorithm that constructs an initial k-partition of the data instances. An iterative relocation technique then attempts to improve the partitioning by moving data from one group to another until a termination criterion is met (see Fig. 1, left part). KMeans produces a locally optimal result.

Principle of agglomerative clustering: Hierarchical agglomerative clustering is a bottom-up approach that initially considers every data instance as a separate cluster and merges the two nearest clusters at each iteration until a termination condition is reached (see Fig. 1, right part). The method uses a similarity-measure matrix, which makes it unsuitable for huge datasets (because of the storage cost).

Parallel algorithms: The two previous methods need to access the whole database, or to communicate at each iteration, in order to obtain a correct solution. Parallel methods exist for KMeans (see [3]) and for agglomerative clustering; parallel versions also exist for the other algorithms cited before (see [6]). For parallel clustering to achieve the same cluster quality as sequential clustering, a lot of communication is required. Those methods are suited to supercomputers such as CC-NUMA or SMP machines, which use a common memory and fast internal interconnection networks (Parallel Data Miner for the IBM SP3, for example). The huge number of communications in existing parallel methods yields performance problems in the context of grids.
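As a rough illustration of the two principles summarized in Fig. 1, here is a minimal sequential sketch (my own code, not the paper's implementation): `kmeans` relocates points to their nearest centre from a random initial choice, and `agglomerative` repeatedly merges the two nearest clusters, using single linkage as one concrete choice of cluster distance.

```python
import random

def dist(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(group):
    """Component-wise mean of a group of points."""
    n = len(group)
    return tuple(sum(c) / n for c in zip(*group))

def kmeans(points, k, iters=20, seed=0):
    """KMeans principle: build an initial k-partition, then iteratively
    relocate each point to its nearest centre.  Converges to a local,
    not necessarily global, optimum."""
    centres = random.Random(seed).sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centres[i]))].append(p)
        centres = [centroid(g) if g else centres[i] for i, g in enumerate(groups)]
    return groups

def agglomerative(points, k):
    """Agglomerative principle: start with one cluster per instance and
    merge the two nearest clusters (single linkage here) until only k
    remain.  The all-pairs distance work is what makes the method too
    costly for huge datasets."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(dist(a, b)
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 8), (8, 9)]
for algo in (kmeans, agglomerative):
    assert sorted(len(g) for g in algo(data, 2)) == [3, 3]
```

Both sketches recover the two obvious groups in the toy data; on a grid, as the text notes, the per-iteration data access they both require is exactly what becomes expensive.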

The classical methods need to be revisited to take into account the constraints of grid architectures (no common memory, slow communications). The Distributed Progressive Clustering (DPC) method presented in the next section considers these constraints.

Fig. 2. Database B and associated matrix V.

Progressive clustering

The distributed progressive clustering method deals with attributes in an incremental manner (this differs from existing incremental methods, which deal with an increasing number of data instances rather than, as in DPC, an increasing number of attributes). The method is suitable for distributed execution, using local computation to construct a global result without synchronization. DPC is inspired by the sequential clustering algorithm CLIQUE (see [1]), which clusters data by projecting it on each dimension and identifying dense clusters among the projections. That method assumes that the whole database can be reached for the projections. In the context of a grid, it is assumed instead that the database is distributed by vertical splits (multibase). DPC works in a bottom-up manner over the attributes of the database: it first computes clusters on vertical fragments containing few attributes, and then combines these clusters to obtain clusters in higher dimensions. Both steps (i.e. the clustering of vertical fragments and the combination of these clusters …
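A toy version of the bottom-up idea (my own sketch, not the DPC algorithm itself) can make the two steps concrete: cluster each attribute's one-dimensional projection independently, then combine the per-attribute labels; rows that share the whole label tuple form a higher-dimensional cluster. The `gap` threshold below is a crude stand-in for the density criterion used by CLIQUE, and each per-attribute pass could run on a different node with no synchronization.

```python
def one_d_clusters(values, gap=2.0):
    """Cluster one attribute's projection: sort the distinct values and
    cut wherever two consecutive values are more than `gap` apart.
    Returns a value -> cluster-label mapping for this attribute."""
    order = sorted(set(values))
    label, labels = 0, {}
    for prev, cur in zip(order, order[1:]):
        labels[prev] = label
        if cur - prev > gap:
            label += 1
    if order:
        labels[order[-1]] = label
    return labels

def combine(rows, gap=2.0):
    """Bottom-up combination: label every row independently on each
    vertical fragment (here, each single attribute), then group rows by
    their full tuple of per-attribute labels; each tuple identifies a
    cluster in the higher-dimensional space."""
    per_attr = [one_d_clusters([r[d] for r in rows], gap)
                for d in range(len(rows[0]))]
    groups = {}
    for r in rows:
        key = tuple(per_attr[d][r[d]] for d in range(len(r)))
        groups.setdefault(key, []).append(r)
    return list(groups.values())

# Four dense regions in 2-D, recovered from two independent 1-D passes.
rows = [(0, 0), (1, 1), (0, 9), (1, 8), (9, 0), (8, 1)]
clusters = combine(rows)
assert sorted(len(c) for c in clusters) == [2, 2, 2]
```

Note that, unlike the real method, this sketch assumes each attribute's full projection is visible to its pass; DPC's contribution is precisely to organize such passes over vertically split fragments on a grid.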
