Search results for: oops something went wrong
Page 5 / 5
Post Title | Result Info | Date | User | Forum
RE: downloading the lesson at prompt engineering | 3 Relevance | 8 months ago | dimu | Prompt Engineering
With Streamlit, I also had an error about the path. I tried different things to solve the problem, and then I uninstalled Streamlit and reinstalled it. That time I noticed a warning message saying the path was wrong and should be changed. The first time I installed Streamlit, I missed that warning because I was in a hurry to do the testing: some warning messages appeared, but I didn't read them and cleared the terminal, so I missed them. The next time, when I uninstalled and reinstalled Streamlit, I did read the message. After that I changed the path and the dream came true. I don't know if this is related to your error; I'm just posting it because it might help someone like me who is new to coding.
RE: Implementing Youtube-Title-Generator-WP-UI in my AI tools. | 3 Relevance | 8 months ago | SSAdvisor | Online Tools Development
@sahu did you figure out what was wrong?
RE: 502 Bad Gateway | 3 Relevance | 8 months ago | SSAdvisor | WordPress
@kivicki this can happen if your redirect URL for the site is wrong or not set properly. I've had to modify the database to correct this issue before. You would need to modify the wp_options table where option_name equals "siteurl" and another where option_name equals "home". HTH
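A minimal sketch of the kind of update @SSAdvisor describes, assuming direct MySQL access, the default wp_ table prefix, and the mysql-connector-python package; the connection details and URL are placeholders, not values from the thread:

import mysql.connector  # assumes mysql-connector-python is installed

NEW_URL = "https://example.com"  # placeholder for the correct site URL

conn = mysql.connector.connect(
    host="localhost", user="wp_user", password="secret", database="wordpress"
)
cur = conn.cursor()
# Point both the 'siteurl' and 'home' options at the correct URL
cur.execute(
    "UPDATE wp_options SET option_value = %s WHERE option_name IN ('siteurl', 'home')",
    (NEW_URL,),
)
conn.commit()
cur.close()
conn.close()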
RE: Finding Content Ideas | 3 Relevance | 9 months ago | dimu | Online Business Strategies
Thank you, Hasan. So what I understand is to use one of the methods below. I selected a micro niche, which is "build WordPress site", and using AI-powered keyword research I found some low-difficulty keywords like "build a new WordPress site while the old site is live" and "how to build a WordPress site from scratch". I plan to do this for around 3 months. Still, I cannot afford your paid course, but once I earn enough money to afford it, I'll start learning from your courses and try to create tools. For now, I'll proceed like this. Please correct me if I am wrong. Thank you so much.
RE: Problem running Youtube transcript from Hasan's newsletter | 3 Relevance | 9 months ago | stavros | Prompt Engineering
Can you guide me further? I already posted my terminal output and all the procedures. Where am I wrong? Thanks.
RE: Problem running Youtube transcript from Hasan's newsletter | 3 Relevance | 9 months ago | stavros | Prompt Engineering
SSAdvisor, thank you for the answer. I repeated the procedure from the beginning with the same result. I made a new file and pasted the "plain text" into it. As you can see in the open editor, "#import necessary libraries" appears on the left after pasting, and then I get the message to save the file somewhere, so I save it. Then I put the API key and the video in the corresponding fields and run the file, either with the command "python app.py" or with the button on the right. The result is always the same: after I run the file, the Terminal shows "ModuleNotFoundError: No module named 'youtube_transcript_api'". Study the screenshots to see what's wrong. Attachment : transcription_8.png Attachment : transcription_7.png Thank you very much.
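A side note for anyone hitting the same ModuleNotFoundError: it usually just means the package is not installed in the interpreter that runs app.py. A minimal check, assuming pip is available (the install command goes in the terminal, not in the script):

# In a terminal, install the package into the same Python that runs app.py:
#     python -m pip install youtube-transcript-api
#
# Then this import should succeed before re-running the script:
from youtube_transcript_api import YouTubeTranscriptApi
print("youtube_transcript_api imported OK")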
RE: Prompts To Generate Images Using AI Models Like Midjourney And Bing AI | 3 Relevance | 9 months ago | ItzXAreeb | Online Tools Development
@admin Attachment : Prompt.txt Attachment : Ai Image Prompt Generator Tool.txt I tried both of these files, but I'm pretty sure I'm adding the wrong instructions somewhere.
RE: Backlinking with your Tools Website | 3 Relevance | 10 months ago | Hasan Aboul Hasan | Online Business Strategies
@google-melissacollins sorry, what exactly is the problem? I am open to listening to any feedback to improve. What is wrong with the keyword research?
CSV download is not including all the results in a Django application | 3 Relevance | 1 year ago | Elda | Python Scripting
Hi, I'm trying to build a web app with Python and Django. In this application a user can enter a list of websites, and the results show each website's status, title, summary, email, phone number, and Facebook and Instagram links if present. At the end the user can download the results as a CSV, but the CSV is showing only 1 result, and even that one is incomplete (the website, the phone, the email, Facebook, and Instagram are missing). What am I doing wrong? Attached are the base.html and result.html files, and here is my views.py file. Any idea how I can solve this? Thanks!

# website_checker/checker/views.py
from django.shortcuts import render
from django.http import HttpResponseRedirect
from django.shortcuts import render, HttpResponse
from django.urls import reverse
import requests
from bs4 import BeautifulSoup
from .utils import get_business_summary, extract_emails, extract_phones, extract_social_media_links
import spacy
import re
import csv
import io

# Function to get a business summary from the text
def get_business_summary(text):
    nlp = spacy.load('en_core_web_sm')
    doc = nlp(text)
    sentences = [sent.text.strip() for sent in doc.sents]
    business_summary = ''
    for sent in sentences:
        # You can add more conditions to extract business-specific information from the text
        if 'business' in sent.lower() or 'company' in sent.lower():
            business_summary = sent
            break
    return business_summary

# Function to extract emails from the text
def extract_emails(text):
    # Use regex pattern for email extraction
    email_pattern = r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}'
    emails = set(re.findall(email_pattern, text))  # Use set to eliminate duplicates
    return list(emails)

# Function to extract phone numbers from the text
def extract_phones(text):
    phone_pattern = re.compile(r'(\+?\d{1,3}[-.\s]?)?(\()?\d{3}(\))?[-.\s]?\d{3}[-.\s]?\d{4}')
    phones = set()
    for match in phone_pattern.finditer(text):
        phone = match.group(0).replace('(', '').replace(')', '').replace('-', '').replace(' ', '').replace('.', '')
        phones.add(phone)
    return list(phones) if phones else ['No phones found']

# Function to extract Facebook and Instagram links from the website
def extract_social_media_links(soup):
    facebook_links = []
    instagram_links = []
    # Find all anchor tags with href attributes
    anchor_tags = soup.find_all('a', href=True)
    for tag in anchor_tags:
        href = tag['href']
        if 'facebook.com' in href:
            facebook_links.append(href)
        elif 'instagram.com' in href:
            instagram_links.append(href)
    # Return the links as lists
    return facebook_links, instagram_links

# Actual implementation of generate_csv function to convert websites_data into CSV format
def generate_csv(websites_data):
    # Prepare CSV data
    csv_data = io.StringIO()  # Create a StringIO object to hold CSV data
    fieldnames = ['Website', 'Status', 'Title', 'Description', 'Business Summary', 'Emails', 'Phones', 'Facebook', 'Instagram']
    # Use DictWriter to write the CSV data
    writer = csv.DictWriter(csv_data, fieldnames=fieldnames)
    writer.writeheader()  # Write the header row
    for data in websites_data:
        # Create a new dictionary with the required fieldnames to avoid extra fields in the CSV
        row_data = {
            'Website': data['url'],
            'Status': data['is_down'],
            'Title': data['title'],
            'Description': data['description'],
            'Business Summary': data['business_summary'],
            'Emails': ', '.join(data['emails']),
            'Phones': ', '.join(data['phones']),
            'Facebook': ', '.join(data['facebook_links']),
            'Instagram': ', '.join(data['instagram_links']),
        }
        # Write the data row
        writer.writerow(row_data)
    return csv_data.getvalue()

def download_csv(request):
    if request.method == 'POST':
        websites_data = request.session.get('websites_data')
        if websites_data:
            # Prepare CSV data
            csv_data = generate_csv(websites_data)
            # Create and return the CSV response
            response = HttpResponse(csv_data, content_type='text/csv')
            response['Content-Disposition'] = 'attachment; filename="websites_data.csv"'
            return response
        else:
            return HttpResponse("No data to download.")
    else:
        return HttpResponse("Invalid request method for CSV download.")

# Combine the check_websites logic with the home view function
def home(request):
    if request.method == 'POST':
        website_urls = request.POST.get('website_urls', '').strip()
        urls_list = website_urls.splitlines()
        # Remove empty strings from the list
        urls_list = list(filter(None, urls_list))
        print("Request Method:", request.method)  # Debugging line
        print("Website URLs:", urls_list)  # Debugging line
        websites_data = []
        for url in urls_list:
            try:
                response = requests.get(url)
                is_down = response.status_code != 200
                soup = BeautifulSoup(response.content, 'html.parser')
                if soup:
                    # Check if the title tag exists
                    title = soup.title
                    if title:
                        title = title.string.strip() if title.string else 'No title available'
                    else:
                        title = 'No title available'
                    # Check if the description meta tag exists
                    description_tag = soup.find('meta', attrs={'name': 'description'})
                    description = description_tag['content'].strip() if description_tag else 'No description available'
                    # Get the website content for NLP processing
                    website_text = soup.get_text()
                    # Get a brief business summary
                    business_summary = get_business_summary(website_text)
                    # Extract emails using regex pattern
                    emails = extract_emails(website_text)
                    # Extract phone numbers using regex pattern
                    phones = extract_phones(website_text)
                    # Extract Facebook and Instagram links from the website
                    facebook_links, instagram_links = extract_social_media_links(soup)
                    # Remove duplicates from Facebook and Instagram links
                    facebook_links = list(set(facebook_links))
                    instagram_links = list(set(instagram_links))
                else:
                    is_down = True
                    title = 'No title available'
                    description = 'No description available'
                    business_summary = 'Unable to retrieve website content.'
                    emails = []
                    phones = []
                    facebook_links = []
                    instagram_links = []
            except requests.exceptions.RequestException:
                is_down = True
                title = 'No title available'
                description = 'No description available'
                business_summary = 'Unable to retrieve website content.'
                emails = []
                phones = []
                facebook_links = []
                instagram_links = []
            # Check the status and set 'UP' or 'Down' accordingly
            status = 'UP' if not is_down else 'Down'
            websites_data.append({
                'url': url,
                'is_down': is_down,
                'title': title,
                'description': description,
                'business_summary': business_summary,
                'emails': emails,
                'phones': phones,
                'facebook_links': facebook_links,
                'instagram_links': instagram_links,
                'status': status,
            })
        # Check if the request is for CSV download
        if request.POST.get('download_csv'):
            # Save websites_data in the session
            request.session['websites_data'] = websites_data
            # Generate the URL for the download view using reverse
            download_url = reverse('download_csv')
            # Redirect to the download view
            return HttpResponseRedirect(download_url)
        # For normal POST request, render the result table
        print("Websites Data:", websites_data)  # Debugging line
        return render(request, 'checker/home.html', {'websites_data': websites_data, 'status': status})
    # For GET request, display the form to enter website URLs
    return render(request, 'checker/home.html')

Attachment : base.html
Attachment : result.html
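One way to narrow down where the rows go missing (a debugging sketch, not from the thread; the import path and sample rows are made up): call generate_csv directly with two complete dictionaries, for instance in python manage.py shell. If both rows appear in the output, the CSV writer is fine and the data is more likely being lost before it reaches request.session:

from checker.views import generate_csv  # hypothetical import path for this app

# Two made-up rows shaped like the entries home() appends to websites_data
sample = [
    {
        'url': 'https://example.com', 'is_down': False,
        'title': 'Example', 'description': 'Demo site',
        'business_summary': 'A demo company.',
        'emails': ['info@example.com'], 'phones': ['5551234567'],
        'facebook_links': ['https://facebook.com/example'],
        'instagram_links': ['https://instagram.com/example'],
    },
    {
        'url': 'https://example.org', 'is_down': True,
        'title': 'No title available', 'description': 'No description available',
        'business_summary': 'Unable to retrieve website content.',
        'emails': [], 'phones': [],
        'facebook_links': [], 'instagram_links': [],
    },
]

print(generate_csv(sample))  # both rows should come back in the CSV text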
RE: Course Homework. Step 11 of 53 in Python Scripting | 3 Relevance | 1 year ago | Geoff Keall | Python Scripting
Mostly syntax problems. After I fixed the brackets and braces, as well as quotation marks that were too many or of the wrong type, it all worked.
RE: earthquake chatgpt file error | 3 Relevance | 1 year ago | Janos Antal | Python Scripting
@admin Hello Hasan, I'm facing the same error: "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0). I have debugged the code, and it is raised in scraper.py on the line with driver.get(url). I tried searching on Google, but there is nothing related to this error. I have no idea what's wrong because I followed your steps exactly. I would appreciate any solution. Thanks in advance for your great support and help.
RE: ChatGPT Earthquake on Mac | 3 Relevance | 1 year ago | Mirad | Python Scripting
@kakil Hi, it seems you have additional files that I don't have. Unfortunately, I am a pure beginner in Python and I can't figure out what's really wrong. Attachment : Captură de ecran din 2023-06-14 la 01.05.36.png Attachment : Captură de ecran din 2023-06-14 la 01.04.09.png
RE: Hasan help me pls | 3 Relevance | 10 months ago | badr | Online Tools Development
@admin First one:

Act as a professional writing assistant. Write a [2000] word listicle blog post titled ["Demystifying Grading: An Easy Guide"] that is optimized for the keyword [easy-grading-calculator]. This exact keyword should appear at least [10] times in the post, including in at least [10] subheadings. Please format the post as HTML. Include a table of contents at the start of the post, after the introduction, linking to each item on the list (using jump links).

Second one:

Act as a professional writing assistant. I will provide you with text and you will do the following:
1. Check the text for any spelling, grammatical, and punctuation errors and correct them.
2. Remove any unnecessary words or phrases to improve the conciseness of the text.
3. Provide an in-depth tone analysis of the text. Include this analysis beneath the corrected version of the input text. Make a thorough and comprehensive analysis of the tone.
4. Re-write sentences you think are hard to read, poorly written, redundant, or repetitive to improve clarity and make them sound better.
5. Assess the word choice and find better or more compelling/suitable alternatives to overused, cliche, or weak word choices.
6. Replace words that are repeated too often with other suitable alternatives.
7. Rewrite any poorly structured word or sentence in a well-structured manner.
8. Ensure that the text does not waffle or ramble pointlessly. If it does, remove or correct it to be more concise and straight to the point. The text should get to the point and avoid fluff.
9. Remove or replace any filler words.
10. Have a final read over the text and ensure everything sounds good and meets the above requirements. Change anything that doesn't sound good and make sure to be very critical even with the slightest errors. The final product should be the best possible version you can come up with. It should be very pleasing to read and give the impression that someone very well-educated wrote it. Ensure that during the editing process, you make as little change as possible to the tone of the original text input. Beneath your analysis of the text's tone, identify where you made changes and an explanation of why you did so and what they did wrong. Make this as comprehensive and thorough as possible. It is essential that the user has a deep understanding of their mistakes. Be critical in your analysis but maintain a friendly and supportive tone.
OUTPUT: Markdown format with #Headings, ##H2, ###H3, + bullet points, + sub-bullet points
Here is the text to check:

I use the first prompt to generate articles, then use the second prompt to improve the article, then go to the Semrush SEO Writing Assistant to improve the SEO keywords. Can this succeed, at least initially, until I become more proficient in English?
RE: Python Script crashes | 3 Relevance | 1 year ago | SSAdvisor | Python Scripting
@aerostan Yes, that is the correct file. I haven't tried Earthquake yet, but I can read code. How are you executing this: in Visual Studio Code or in a command window? Is there anything else you changed? Why were you trying to attach the Report.py file? P.S.: I hate Windows OS. Don't get me wrong, I've used it for a long time, but programming for it is a nightmare IMO. Currently I'm using a Google Pixel Go with ChromeOS, using the Linux layer for my development efforts.
RE: Python Script crashes | 3 Relevance | 1 year ago | Aerostan | Python Scripting
Hi Hasan, any thoughts on the issues I'm facing? Or where I might have gone wrong? Thanks, Stan
Python Script crashes | 3 Relevance | 1 year ago | Aerostan | Python Scripting
Hi guys, I am having problems with the script. Everything downloaded OK, but when I pressed 'any key to continue' the command window just disappeared. I've included a screenshot taken just before I pushed the button. Any thoughts on where I've gone wrong? Thanks, Stan Attachment : Python Script.jpg
RE: How to deploy a script on a website | 3 Relevance | 1 year ago | SSAdvisor | Online Tools Development
That depends on the script. Do you have a GitHub account you could upload the script to? Or maybe just upload the file here.
Determining token cost | 3 Relevance | 1 year ago | sIVARAM bandaru | Python Scripting
Calculating the token cost is giving me a result of None. Can you please point me to what I am doing wrong?

Main file:

import helpers

token_count = 3000
costs = helpers.estimate_input_cost_optimized("gpt-3.5-turbo-0613", token_count)
print(f"Costs: {costs}")

helpers is written as:

import tiktoken
import openai

# Estimate cost
def estimate_input_cost_optimized(model_name, token_count):
    model_cost_dict = {
        "gpt-3.5-turbo-0613": 0.0015,
        "gpt-3.5-turbo-16k-0613": 0.003,
        "gpt-4-0613": 0.03,
        "gpt-4-32k-0613": 0.06
    }
    try:
        cost_per_1000_tokens = model_cost_dict[model_name]
    except KeyError:
        raise ValueError(f"The model '{model_name}' is not recognized.")
    estimated_cost = (token_count / 1000) * cost_per_1000_token

Result after running the code: Costs: None
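For readers hitting the same thing: the helper as posted never returns a value, and its last line references cost_per_1000_token (singular) while the variable defined above it is cost_per_1000_tokens, which is why costs prints as None. A minimal corrected sketch of the same helper:

# Corrected version: fixes the variable-name typo and returns the
# computed value so the caller no longer receives None.
def estimate_input_cost_optimized(model_name, token_count):
    model_cost_dict = {
        "gpt-3.5-turbo-0613": 0.0015,
        "gpt-3.5-turbo-16k-0613": 0.003,
        "gpt-4-0613": 0.03,
        "gpt-4-32k-0613": 0.06,
    }
    try:
        cost_per_1000_tokens = model_cost_dict[model_name]
    except KeyError:
        raise ValueError(f"The model '{model_name}' is not recognized.")
    return (token_count / 1000) * cost_per_1000_tokens

print(estimate_input_cost_optimized("gpt-3.5-turbo-0613", 3000))  # 0.0045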
Web scraping from Indeed.com | 3 Relevance | 1 year ago | sIVARAM bandaru | Python Scripting
Step 1: I did a job search on Indeed.com and used the link as below in Python:

from bs4 import BeautifulSoup
import requests

url = "https://www.indeed.com/jobs?q=IT+director&l=Remote&from=searchOnHP&vjk=84953521ad7c4774"
req = requests.get(url)
soup = BeautifulSoup(req.text, "html.parser")

# Find all the job posts
job_posts = soup.find_all('meta', name="description")

# Print the title of each job post
for job_post in job_posts:
    title = job_post.a.text
    print(title)

Step 2: Verified the HTML code has the description as below.

<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="description" content="1,349 IT Director jobs available in Remote on Indeed.com. Apply to Director of Information Technology, Director of Partnerships, Director of Analytics and more!">
<meta name="referrer" content="origin-when-cross-origin">

Step 3: Running the Python code in step 1 gives lots of errors, such as:

line 507, in send
    raise ConnectTimeout(e, request=request)

What is it that I am doing wrong?
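A hedged sketch, not a confirmed fix: the ConnectTimeout means Indeed never answered the request at all (the site is aggressive about blocking non-browser clients), so plain requests may not get through regardless. Separately, find_all('meta', name="description") does not do what it looks like: in BeautifulSoup the name keyword clashes with the tag-name parameter, so attribute filters have to go through attrs; and a <meta> tag has no <a> child, so job_post.a.text would fail even on a match. A version that at least expresses the intent correctly, with a browser-like User-Agent and a timeout:

from bs4 import BeautifulSoup
import requests

url = "https://www.indeed.com/jobs?q=IT+director&l=Remote"
# A browser-like User-Agent sometimes helps; Indeed may still block or
# CAPTCHA automated clients, so treat this as a sketch, not a guarantee.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

req = requests.get(url, headers=headers, timeout=10)
soup = BeautifulSoup(req.text, "html.parser")

# 'name' clashes with find_all's tag-name argument, so filter via attrs=
description_tag = soup.find("meta", attrs={"name": "description"})
if description_tag:
    # <meta> carries its text in the 'content' attribute, not in child tags
    print(description_tag["content"])
else:
    print("No description meta tag found (request may have been blocked)")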
RE: earthquake gpt giving this error | 3 Relevance | 1 year ago | vasmod | Python Scripting
Hi @admin Hasan, amazing work on putting the script together!! Can I ask for a quick pointer? I seem to have a similar error, but I checked my API key and even generated a new one from OpenAI. When I run the script I get the following:

DevTools listening on ws://127.0.0.1:53335/devtools/browser/6cd89f8b-d590-43e5-a661-3f1fea25d3e1
[0603/084920.348:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0)
[0603/084920.357:INFO:CONSOLE(1)] "The Cross-Origin-Opener-Policy header has been ignored, because the URL's origin was untrustworthy. It was defined either in the final response or a redirect. Please deliver the response using the HTTPS protocol. You can also use the 'localhost' origin instead. See and .", source: (1)
[0603/084922.677:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Unrecognized feature: 'ch-ua-form-factor'.", source: (0)
[0603/084923.279:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0)
[0603/084924.840:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Unrecognized feature: 'ch-ua-form-factor'.", source: (0)
[0603/084925.861:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0)
[0603/084927.893:INFO:CONSOLE(0)] "Error with Permissions-Policy header: Origin trial controlled feature not enabled: 'unload'.", source: (0)

Any ideas what could be wrong?
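For what it's worth, these INFO:CONSOLE lines are Chrome's own console log being echoed by chromedriver, and the Permissions-Policy 'unload' message in particular is generally harmless noise rather than the script failing. If the goal is just a quieter terminal, one common approach, sketched here under the assumption that the script drives Chrome via Selenium, is:

from selenium import webdriver

options = webdriver.ChromeOptions()
# Suppress chromedriver's echo of Chrome's console log; this hides the
# INFO:CONSOLE noise but does not change the script's behaviour.
options.add_experimental_option("excludeSwitches", ["enable-logging"])

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")  # placeholder URL
driver.quit()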