Cannot export table to an output file because of secure_file_priv

Problem description

I am using Windows 7 and MySQL 8.0. I tried to edit my.ini after first stopping the service. When I tried to save my.ini with secure_file_priv = "" in it, I got "access denied". So I saved it as 'my1.ini', deleted 'my.ini', and then renamed 'my1.ini' back to 'my.ini'. Now, when I try to start the MySQL80 service from Administrative Tools > Services, it will not start anymore. I have also tried from the CLI client, but it raised the secure_file_priv problem. What should I do? I have been able to store the scraped data into a MySQL database using Scrapy, but I cannot export it to my project directory.

#pipelines.py

from itemadapter import ItemAdapter
import mysql.connector

class QuotewebcrawlerPipeline(object):

    def __init__(self):
        self.create_connection()
        self.create_table()
        #self.dump_database()

    def create_connection(self):
        """
            This method will create the database connection & the cursor object
        """
        self.conn = mysql.connector.connect(host = 'localhost',
                                            user = 'root',
                                            passwd = 'Pxxxx',
                                            database = 'itemcontainer'
                                        )
        self.cursor = self.conn.cursor()
    
    def create_table(self):
        self.cursor.execute(""" DROP TABLE IF EXISTS my_table""")
        self.cursor.execute(""" CREATE TABLE my_table (
                                Quote text,
                                Author text,
                                Tag text)"""
                            )

    def process_item(self, item, spider):
        #print(item['quote'])
        self.store_db(item)
        return item

    def store_db(self,item):
        """
            This method is used to write the scraped data from item container into the database
        """
        #pass
        self.cursor.execute(""" INSERT INTO my_table VALUES(%s,%s,%s)""",(item['quote'][0],item['author'][0],
                                                                            item['tag'][0])
                            )
        self.conn.commit()
        #self.dump_database()

    # def dump_database(self):
    #     self.cursor.execute("""USE itemcontainer;SELECT * from my_table INTO OUTFILE 'quotes.txt'""",
    #                         multi = True
    #     )
    #     print("Data saved to output file")
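The commented-out `dump_database` above has two separate problems: `SELECT ... INTO OUTFILE` writes the file on the *server*, so it is the statement that `secure_file_priv` blocks, and passing two statements with `multi=True` returns an iterator of results rather than executing directly. A sketch of an alternative that sidesteps `secure_file_priv` entirely by fetching the rows over the normal client connection and writing the file from Python (the `rows_to_tsv` helper is hypothetical; the cursor attribute names match the pipeline above):

```python
def rows_to_tsv(rows):
    """Format rows the way SELECT ... INTO OUTFILE does by default:
    tab-separated fields, newline-terminated records."""
    return "".join("\t".join(str(field) for field in row) + "\n" for row in rows)

def dump_database(self):
    # Fetch everything through the ordinary client protocol ...
    self.cursor.execute("SELECT * FROM my_table")
    rows = self.cursor.fetchall()
    # ... and write the file client-side, into the project directory,
    # where secure_file_priv does not apply.
    with open("quotes.txt", "w", encoding="utf-8") as f:
        f.write(rows_to_tsv(rows))
    print("Data saved to output file")
```

Because the file is created by the Python process rather than by mysqld, no change to my.ini is needed for this approach.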

#item_container.py

import scrapy
from ..items import QuotewebcrawlerItem

class ItemContainer(scrapy.Spider):

    name = 'itemcontainer'
    start_urls = [
        "http://quotes.toscrape.com/"
    ]

    def parse(self, response):
        items = QuotewebcrawlerItem()
        all_div_quotes = response.css("div.quote")
        for quotes in all_div_quotes:
            quote = quotes.css(".text::text").extract()
            author = quotes.css(".author::text").extract()
            tag = quotes.css(".tag::text").extract()

            items['quote'] = quote
            items['author'] = author
            items['tag'] = tag

            yield items

Tags: mysql scrapy mysql-python

Solution
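`secure_file_priv` is read-only at runtime, so it can only be changed in my.ini and takes effect after a service restart. The "access denied" when saving my.ini is a Windows file-permission issue: the file typically lives under the MySQL data/config directory in ProgramData, so the editor must be run as administrator. A common reason the MySQL80 service then refuses to start after re-saving the file is that the editor wrote it as UTF-8 *with* a BOM, which mysqld cannot parse; save it as ANSI or UTF-8 without BOM. A sketch of the relevant section (the exact path and surrounding settings depend on the installation):

```ini
; my.ini -- [mysqld] section; save the file WITHOUT a UTF-8 BOM,
; a BOM at the start of the file stops the MySQL80 service from starting.
[mysqld]
; An empty string removes the export-path restriction entirely.
; Alternatively, set it to a specific directory to allow
; INTO OUTFILE exports only into that directory.
secure_file_priv=""
```

After restarting the service, the effective value can be verified with `SHOW VARIABLES LIKE 'secure_file_priv';` before retrying the export.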


推荐阅读