
    Data Structure (246.96 MB)


    This news dataset is a persistent historical archive of notable events in the Indian subcontinent from early 2001 to mid-2020, recorded in real time by journalists in India. It contains approximately 3.3 million events published by the Times of India.

    The majority of the data focuses on Indian local news, including national, city-level, and entertainment coverage.

    The individual events can be explored in detail via the archives section on the agency website.

    Prepared by Rohit Kulkarni


    CSV Rows: 3,297,172

    1. publish_date: Date the article was published online, in yyyyMMdd format

    2. headline_category: Category of the headline; ASCII, dot-delimited, lowercase values

    3. headline_text: Text of the headline in English; ASCII characters only

    Start Date: 2001-01-01 End Date: 2020-06-30
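    The three-column schema above can be read with nothing beyond the standard library. A minimal sketch, assuming the documented column names; the two sample rows are hypothetical, not taken from the dataset:

```python
import csv
import io
from datetime import datetime

# Hypothetical sample in the documented three-column format.
sample = io.StringIO(
    "publish_date,headline_category,headline_text\n"
    "20010101,unknown,Sample headline one\n"
    "20200630,city.mumbai,Sample headline two\n"
)

# Read each record and parse publish_date from its yyyyMMdd form.
rows = []
for rec in csv.DictReader(sample):
    rec["publish_date"] = datetime.strptime(rec["publish_date"], "%Y%m%d")
    rows.append(rec)

print(rows[0]["publish_date"].date())  # 2001-01-01
```

    For the full 3.3-million-row file, the same loop works with `open(path, newline="")` in place of the in-memory sample.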


    The Times Group, as a news agency, reaches a very wide audience across Asia and dwarfs every other agency in the quantity of English articles published per day. Owing to the heavy daily volume (about 650 articles on average) sustained over many years, this data offers deep insight into Indian society: its priorities, events, issues, and talking points, and how they have unfolded over time.

    The dataset can be sliced into smaller subsets for more focused analysis, based on one or more facets.

    • Time range: records during the 2014 election, the 2006 Mumbai bombings, or the 2020 Covid outbreak

    • One or more categories: e.g. citywise, Bollywood, ICC updates, Magazine, Middle East

    • One or more keywords: e.g. crime- or ecology-related words, names of political parties, celebrities, or corporations
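    Each of the three facets above maps to a simple filter. A hedged sketch with hypothetical rows (publish_date already parsed to a date; the date ranges and category names are illustrative assumptions):

```python
from datetime import date

# Hypothetical parsed records in the dataset's three-column shape.
rows = [
    {"publish_date": date(2014, 5, 10),
     "headline_category": "india.elections",
     "headline_text": "Parties gear up for final polling phase"},
    {"publish_date": date(2020, 3, 20),
     "headline_category": "entertainment.bollywood",
     "headline_text": "Film release postponed amid Covid outbreak"},
]

# Facet 1 - time range: records from the 2014 election period (assumed window).
election_2014 = [r for r in rows
                 if date(2014, 4, 7) <= r["publish_date"] <= date(2014, 5, 16)]

# Facet 2 - category: everything under the Bollywood subtree of the dot hierarchy.
bollywood = [r for r in rows
             if r["headline_category"].startswith("entertainment.bollywood")]

# Facet 3 - keyword: headlines mentioning Covid, case-insensitively.
covid = [r for r in rows if "covid" in r["headline_text"].lower()]

print(len(election_2014), len(bollywood), len(covid))  # 1 1 1
```

    Because categories are dot-delimited, a prefix match naturally selects a whole subtree of the category hierarchy.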


    The headlines are extracted from several gigabytes of raw HTML files using Bash and Java.

    This logic also: chooses the best-worded headline for each article (the longest one is usually picked); clusters about 12k categories into roughly 300 large groups; removes records where the date is ambiguous; and finally cleans the selected headline via a string 'domestication' function.
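    The "longest headline wins" rule can be stated in one line. A sketch with hypothetical candidates; the original pipeline did this in Bash and Java over raw HTML, so the grouping here is purely illustrative:

```python
# Hypothetical candidate headlines scraped for the same article.
candidates = [
    "PM speaks",
    "PM speaks on economy",
    "PM speaks on economy at annual trade summit",
]

# Keep the longest candidate, per the selection rule described above.
best = max(candidates, key=len)
print(best)  # PM speaks on economy at annual trade summit
```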

    The final categories follow the latest sitemap. A few hundred rare categories remain; these records can be filtered out easily during analysis. The category is unknown for ~200k records. There are no missing dates after the year 2001, and efforts have been made to preserve the order of posting.
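    Filtering out the rare and unknown categories mentioned above is a two-pass count-then-drop. A hedged sketch; the rows and the rarity cutoff are assumptions, not part of the dataset's documentation:

```python
from collections import Counter

# Hypothetical records; only the category column matters here.
rows = [
    {"headline_category": "india"},
    {"headline_category": "india"},
    {"headline_category": "unknown"},
    {"headline_category": "rare.category"},
]

# Pass 1: count how often each category appears.
counts = Counter(r["headline_category"] for r in rows)

MIN_COUNT = 2  # hypothetical cutoff for calling a category "rare"

# Pass 2: drop unknown-category records and categories below the cutoff.
kept = [r for r in rows
        if r["headline_category"] != "unknown"
        and counts[r["headline_category"]] >= MIN_COUNT]

print(len(kept))  # 2
```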

    Similar news datasets exploring other attributes, countries, and topics can be found on my profile.



