(Unicode C++) A Simple Web Crawler

This example demonstrates a very simple web crawler using the Chilkat Spider component. It crawls up to five pages per seed domain, waits one second between live fetches, caches downloaded pages on disk, and then follows outbound links into domains it has not yet seen.

Chilkat C/C++ library downloads: MS Visual C/C++, Linux/CentOS, Alpine Linux, Mac OS X, armhf/aarch64, C++ Builder, iOS, Android, Solaris, MinGW.

#include <wchar.h>   // for wprintf

#include <CkSpiderW.h>
#include <CkStringArrayW.h>

void ChilkatSample(void)
    {
    CkSpiderW spider;

    CkStringArrayW seenDomains;
    CkStringArrayW seedUrls;

    seenDomains.put_Unique(true);
    seedUrls.put_Unique(true);

    // You will need to change the start URL to something else...
    seedUrls.Append(L"http://something.whateverYouWant.com/");

    // Set outbound URL exclude patterns
    // URLs matching any of these patterns will not be added to the 
    // collection of outbound links.
    spider.AddAvoidOutboundLinkPattern(L"*?id=*");
    spider.AddAvoidOutboundLinkPattern(L"*.mypages.*");
    spider.AddAvoidOutboundLinkPattern(L"*.personal.*");
    spider.AddAvoidOutboundLinkPattern(L"*.comcast.*");
    spider.AddAvoidOutboundLinkPattern(L"*.aol.*");
    spider.AddAvoidOutboundLinkPattern(L"*~*");

    // Use a cache so we don't have to re-fetch URLs previously fetched.
    spider.put_CacheDir(L"c:/spiderCache/");
    spider.put_FetchFromCache(true);
    spider.put_UpdateCache(true);

    while (seedUrls.get_Count() > 0) {

        const wchar_t *url = seedUrls.pop();
        spider.Initialize(url);

        // Crawl up to 5 URLs within this domain.
        // But first, record the base domain in seenDomains so that
        // outbound links back to domains we've already crawled are skipped later.
        const wchar_t *domain = spider.getUrlDomain(url);
        seenDomains.Append(spider.getBaseDomain(domain));

        int i;
        bool success;
        for (i = 0; i <= 4; i++) {
            success = spider.CrawlNext();
            if (success == true) {

                // Display the URL we just crawled.
                wprintf(L"%s\n",spider.lastUrl());

                // If the last URL was retrieved from cache,
                // we won't wait.  Otherwise we'll wait 1 second
                // before fetching the next URL.
                if (spider.get_LastFromCache() != true) {
                    spider.SleepMs(1000);
                }

            }
            else {
                // No more URLs to crawl within this domain; exit the loop.
                break;
            }

        }

        // Add the outbound links to seedUrls, except
        // for the domains we've already seen.
        for (i = 0; i <= spider.get_NumOutboundLinks() - 1; i++) {

            url = spider.getOutboundLink(i);
            const wchar_t *domain = spider.getUrlDomain(url);
            const wchar_t *baseDomain = spider.getBaseDomain(domain);
            if (seenDomains.Contains(baseDomain) == false) {
                // Don't let our list of seedUrls grow too large.
                if (seedUrls.get_Count() < 1000) {
                    seedUrls.Append(url);
                }

            }

        }

    }
    }
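
For reference, here is a minimal sketch of a driver that runs the sample above. The main() wrapper and file layout are assumptions for illustration, not part of the Chilkat example. Note also that the cache directory (c:/spiderCache/) is a Windows-style path; adjust it on other platforms.

// Declaration of the sample function shown above.
void ChilkatSample(void);

int main(void)
    {
    // Run the crawler; it loops until seedUrls is exhausted.
    ChilkatSample();
    return 0;
    }

To build, compile this together with the sample and link against the Chilkat library from the download for your platform; on Linux, a typical link line also adds -lpthread and -lresolv (check the instructions bundled with the Chilkat download for the exact library name and flags).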

 
